Re: [Wien] Bader charge

2022-11-11 Thread leila mollabashi
Dear Prof. Laurence Marks and all,

> cannot reproduce your results.

In addition to the previous link:
https://www.mediafire.com/file/o6ceodngk93mh72/aim.rar/file

I have uploaded the initialization files to:
https://www.mediafire.com/file/3tb9hg2i09fn5ez/batio3.tar/file

The calculation was performed with WIEN2k_21.1. For “x aim”, both 21.1
and 18.2 were used.

> I ran them with both 21.1 and a pre-release version of 22.1 and the
results are almost the same.

Thank you.

>If you look at your *.outputaim, it is clear that something is badly
wrong.

That’s right.

>Please check that you have the values in your case.inaim correct. Maybe
there is something wrong with the clmsum/*.in* files etc that you used?

I cannot find the source of the error. Would you please guide me?
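
For reference, the NaN values and radius warnings mentioned above can be
located quickly with grep (a minimal sketch; the file name case.outputaim is
an assumption and depends on how “x aim” was run):

grep -in "nan" case.outputaim              # NaN values near the end of the file
grep -in -e warn -e radius case.outputaim  # earlier warnings about the radii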

Leila Mollabashi

On Fri, Nov 4, 2022 at 11:53 PM Laurence Marks 
wrote:

> I cannot reproduce your results. I ran them with both 21.1 and a
> pre-release version of 22.1 and the results are almost the same. They
> are
>
> Rhombohedral Cell
> ../Ba.aim::RHOTOT for IND-ATOM 1  Z= 56.0  CHARGE: 54.48948  Z - Charge:  1.51052
> ../O.aim::RHOTOT  for IND-ATOM 3  Z=  8.0  CHARGE:  9.24294  Z - Charge: -1.24294
> ../Ti.aim::RHOTOT for IND-ATOM 2  Z= 22.0  CHARGE: 19.77893  Z - Charge:  2.22107
>
> Your cubic cell
> Ba.aim::RHOTOT for IND-ATOM 1  Z= 56.0  CHARGE: 54.47591  Z - Charge:  1.52409
> O.aim::RHOTOT  for IND-ATOM 3  Z=  8.0  CHARGE:  9.25386  Z - Charge: -1.25386
> Ti.aim::RHOTOT for IND-ATOM 2  Z= 22.0  CHARGE: 19.74700  Z - Charge:  2.25300
>
> If you look at your *.outputaim, it is clear that something is badly
> wrong. Look at the end and you will see that there are NaN values, and
> earlier some warnings about the radii. Please check that you have the
> values in your case.inaim correct. Maybe there is something wrong with
> the clmsum/*.in* files etc that you used?
>
> On Fri, Nov 4, 2022 at 2:25 PM leila mollabashi 
> wrote:
> >
> > Dear Wien2k developers and users,
> >
> > I would like to calculate Bader charges in BaTiO3. The input and output
> files are uploaded to
> https://www.mediafire.com/file/o6ceodngk93mh72/aim.rar/file for your kind
> consideration.
> >
> > I have run a PBE-GGA calculation using 1000 k-points, RMT*Kmax = 7, Gmax =
> 12 Bohr^-1 employing WIEN2k_21.1 and then executed “x aim” using the third
> part of case.inaim from SRC_templates. The calculated charges are:
> >
> > Ba: 2.91, Ti: 2.68, O: 0.299
> >
> > The results do not satisfy charge neutrality because of the positive
> charge wrongly calculated for O, i.e., 2.91+2.68+3*0.299 = 6.487 != 0.
> >
> > Then, to improve the results, I increased the LM values in the original
> calculation, with no improvement.
> >
> > By chance, I found that the Bader charges of BaTiO3 come out correctly
> with “x aim” of WIEN2k_18.2:
> >
> > Ba: 1.52, Ti: 2.23, O: -1.25
> >
> > The above results, calculated with the older version 18.2, not only
> give a negative charge for oxygen but also sum to zero when the
> stoichiometry of the compound is taken into account, i.e.,
> 1.52+2.23+3*(-1.25) = 0.
> >
> > To be sure, I also checked versions 14 and 16.1 of the WIEN2k code and
> found correct results, the same as with WIEN2k_18.2. This suggests that
> something is most likely different in the older versions (WIEN2k_18.2,
> 16.1, and 14) compared to the latest version, WIEN2k_21.1.
> >
> > I also checked LaCrO3 and found the same discrepancy. The Bader charges
> of LaCrO3 were calculated to be La: 2.08, Cr: 1.65, O: -1.24 in Ref.
> [Energy Environ. Sci., 2011, 4, 4933]. These results also sum approximately
> to zero: 2.08+1.65+3*(-1.24) ~ 0.01.
> >
> > Would you, please, have a look at this issue and let us know the source
> of the above discrepancy?
> >
> > Sincerely yours,
> >
> > Leila Mollabashi
> >
> > ___
> > Wien mailing list
> > Wien@zeus.theochem.tuwien.ac.at
> > http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> > SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>
>
>
> --
> Professor Laurence Marks
> Department of Materials Science and Engineering
> Northwestern University
> www.numis.northwestern.edu
> "Research is to see what everybody else has seen, and to think what
> nobody else has thought", Albert Szent-Györgyi
> ___
> Wien mailing list
> Wien@zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


[Wien] Bader charge

2022-11-04 Thread leila mollabashi
Dear Wien2k developers and users,

I would like to calculate Bader charges in BaTiO3. The input and output
files are uploaded to
https://www.mediafire.com/file/o6ceodngk93mh72/aim.rar/file for your kind
consideration.

I have run a PBE-GGA calculation using 1000 k-points, RMT*Kmax = 7, Gmax = 12
Bohr^-1 employing WIEN2k_21.1 and then executed “x aim” using the third part
of case.inaim from SRC_templates. The calculated charges are:

Ba: 2.91, Ti: 2.68, O: 0.299

The results do not satisfy charge neutrality because of the positive charge
wrongly calculated for O, i.e., 2.91+2.68+3*0.299 = 6.487 != 0.
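
As a quick sanity check, the Bader charges weighted by the number of atoms
per formula unit should sum to (approximately) zero for a neutral cell; a
minimal shell sketch with bc, using the values quoted in this thread:

echo "2.91 + 2.68 + 3*0.299" | bc -l     # gives 6.487, clearly not neutral
echo "1.52 + 2.23 + 3*(-1.25)" | bc -l   # gives 0, as expected for BaTiO3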

Then, to improve the results, I increased the LM values in the original
calculation, with no improvement.

By chance, I found that the Bader charges of BaTiO3 come out correctly
with “x aim” of WIEN2k_18.2:

Ba: 1.52, Ti: 2.23, O: -1.25

The above results, calculated with the older version 18.2, not only give a
negative charge for oxygen but also sum to zero when the stoichiometry of
the compound is taken into account, i.e., 1.52+2.23+3*(-1.25) = 0.

To be sure, I also checked versions 14 and 16.1 of the WIEN2k code and found
correct results, the same as with WIEN2k_18.2. This suggests that something
is most likely different in the older versions (WIEN2k_18.2, 16.1, and 14)
compared to the latest version, WIEN2k_21.1.

I also checked LaCrO3 and found the same discrepancy. The Bader charges of
LaCrO3 were calculated to be La: 2.08, Cr: 1.65, O: -1.24 in Ref.
[Energy Environ. Sci., 2011, 4, 4933]. These results also sum approximately
to zero: 2.08+1.65+3*(-1.24) ~ 0.01.

Would you, please, have a look at this issue and let us know the source of
the above discrepancy?

Sincerely yours,

Leila Mollabashi
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] MPI Error

2021-06-19 Thread leila mollabashi
Dear all WIEN2k users,

>The recommended option for mpi version 2 (all modern mpis) is to set
MPI_REMOTE to zero. The mpirun command will be issued on the original
node, but the lapw1_mpi executables will run as given in .machines.

>This should solve your problem.
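
For reference, that switch lives in $WIENROOT/parallel_options (named
WIEN2k_parallel_options in 21.1, as quoted above); a minimal sketch of the
relevant csh-style line, assuming the usual layout of that file:

setenv MPI_REMOTE 0   # issue mpirun on the original node; lapw1_mpi runs as given in .machines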

Now mpi and k-point parallelization run simultaneously, but k-point
parallelization without mpi does not work, either in interactive mode or
with “set mpijob=1” in the submit.sh script. For example, with the following
.machines file:

1:e2236

1:e2236

1:e2236

1:e2236

the following error appeared:

bash: lapw1: command not found
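
As discussed later in this thread, “lapw1: command not found” in k-point
parallel mode usually means the remote (ssh) shell does not see the WIEN2k
environment; a minimal sketch of what ~/.bashrc on the compute nodes would
need so that non-interactive shells find lapw1 (module names are taken from
this thread, the WIENROOT path is an assumption):

# make WIEN2k visible to the non-interactive ssh shells used for k-point parallel jobs
module load ifort
module load mkl
module load openmpi/4.1.0_gcc620
export WIENROOT=/home/users/mollabashi/codes/v21.1
export PATH=$WIENROOT:$PATH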

My question is whether I can use two different installations of WIEN2k to
perform two different calculations at the same time.

Sincerely yours,

Leila


On Sat, May 29, 2021 at 7:07 PM Peter Blaha 
wrote:

> The difference between lapw0para and lapw1para is that
> lapw0para always executes mpirun on the original node, lapw1para maybe not.
>
> The behavior of lapw1para depends on MPI_REMOTE (set in
> WIEN2k_parallel_options in w2k21.1 (or parallel_options earlier).
> With MPI_REMOTE=1 it will first issue a   ssh nodename and there it does
> the mpirun. This does not work with your settings (probably because you
> do not load the modules in your .bashrc (or .cshrc), but only in your
> slurm-job and your system does not transfer the environment with ssh.
>
> The recommended option for mpi version 2 (all modern mpis) is to set
> MPI_REMOTE to zero. The mpirun command will be issued on the original
> node, but the lapw1_mpi executables will run as given in .machines.
>
> This should solve your problem.
>
> Am 29.05.2021 um 08:39 schrieb leila mollabashi:
> > Dear all wien2k users,
> > Following the previous comment referring me to the admin, I contacted
> > the cluster admin. By the comment of the admin, I recompiled Wien2k
> > successfully using the cluster modules.
> >>Once the blacs problem has been fixed,
> > For example, is the following correct?
> > libmkl_blacs_openmpi_lp64.so =>
> >
> /opt/exp_soft/local/generic/intel/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.so
>
> > (0x2b21efe03000)
> >>the next step is to run lapw0 in
> > sequential and parallel mode.
> >>Add:
> > x lapw0 and check the case.output0 and case.scf0 files (copy them to
> > a different name) as well as the message from the queuing system. ...
> > The “x lapw0” and “mpirun -np 4 $WIENROOT/lapw0_mpi lapw0.def” are
> > interactively executed correctly.
> > The “x lapw0 -p” is also correctly executed using the following
> > “.machines” file:
> > lapw0:e0017:4
> >>The same thing could be made with lapw1
> > The “x lapw1” and “mpirun -np 4 $WIENROOT/lapw1_mpi lapw1.def” are also
> > correctly executed interactively with no problem. But “x lapw1 -p” stops
> > when I use the following “.machines” file:
> > 1:e0017:2
> > 1:e0017:2
> > bash: mpirun: command not found
> > The output files are gathered into https://files.fm/u/7cssehdck
> > <https://files.fm/u/7cssehdck>.
> > Would you, please, help me to fix the parallel problem too?
> > Sincerely yours,
> > Leila
> >
> >
> > ___
> > Wien mailing list
> > Wien@zeus.theochem.tuwien.ac.at
> > http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> > SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
> >
>
> --
> --
> Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
> Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
> Email: bl...@theochem.tuwien.ac.atWIEN2k: http://www.wien2k.at
> WWW:   http://www.imc.tuwien.ac.at
> -
> ___
> Wien mailing list
> Wien@zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] MPI Error

2021-05-30 Thread leila mollabashi
Dear all Wien2k users,
It is a pleasure to report that at last my problem is solved.
Here, I would like to express my gratitude to Peter Blaha, Laurence Marks,
Gavin Abo and Fecher Gerhard for all their very nice and valuable comments
and helpful links.
Sincerely yours,
Leila


On Sat, May 29, 2021 at 7:07 PM Peter Blaha 
wrote:

> The difference between lapw0para and lapw1para is that
> lapw0para always executes mpirun on the original node, lapw1para maybe not.
>
> The behavior of lapw1para depends on MPI_REMOTE (set in
> WIEN2k_parallel_options in w2k21.1 (or parallel_options earlier).
> With MPI_REMOTE=1 it will first issue a   ssh nodename and there it does
> the mpirun. This does not work with your settings (probably because you
> do not load the modules in your .bashrc (or .cshrc), but only in your
> slurm-job and your system does not transfer the environment with ssh.
>
> The recommended option for mpi version 2 (all modern mpis) is to set
> MPI_REMOTE to zero. The mpirun command will be issued on the original
> node, but the lapw1_mpi executables will run as given in .machines.
>
> This should solve your problem.
>
> Am 29.05.2021 um 08:39 schrieb leila mollabashi:
> > Dear all wien2k users,
> > Following the previous comment referring me to the admin, I contacted
> > the cluster admin. By the comment of the admin, I recompiled Wien2k
> > successfully using the cluster modules.
> >>Once the blacs problem has been fixed,
> > For example, is the following correct?
> > libmkl_blacs_openmpi_lp64.so =>
> >
> /opt/exp_soft/local/generic/intel/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.so
>
> > (0x2b21efe03000)
> >>the next step is to run lapw0 in
> > sequential and parallel mode.
> >>Add:
> > x lapw0 and check the case.output0 and case.scf0 files (copy them to
> > a different name) as well as the message from the queuing system. ...
> > The “x lapw0” and “mpirun -np 4 $WIENROOT/lapw0_mpi lapw0.def” are
> > interactively executed correctly.
> > The “x lapw0 -p” is also correctly executed using the following
> > “.machines” file:
> > lapw0:e0017:4
> >>The same thing could be made with lapw1
> > The “x lapw1” and “mpirun -np 4 $WIENROOT/lapw1_mpi lapw1.def” are also
> > correctly executed interactively with no problem. But “x lapw1 -p” stops
> > when I use the following “.machines” file:
> > 1:e0017:2
> > 1:e0017:2
> > bash: mpirun: command not found
> > The output files are gathered into https://files.fm/u/7cssehdck
> > <https://files.fm/u/7cssehdck>.
> > Would you, please, help me to fix the parallel problem too?
> > Sincerely yours,
> > Leila
> >
> >
> > ___
> > Wien mailing list
> > Wien@zeus.theochem.tuwien.ac.at
> > http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> > SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
> >
>
> --
> --
> Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
> Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
> Email: bl...@theochem.tuwien.ac.atWIEN2k: http://www.wien2k.at
> WWW:   http://www.imc.tuwien.ac.at
> -
> ___
> Wien mailing list
> Wien@zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


[Wien] MPI Error

2021-05-29 Thread leila mollabashi
Dear all WIEN2k users,
Following the previous comment referring me to the admin, I contacted the
cluster admin. Based on the admin's advice, I recompiled WIEN2k
successfully using the cluster modules.
>Once the blacs problem has been fixed,
For example, is the following correct?
libmkl_blacs_openmpi_lp64.so =>
/opt/exp_soft/local/generic/intel/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.so
(0x2b21efe03000)
>the next step is to run lapw0 in
sequential and parallel mode.
>Add:
x lapw0 and check the case.output0 and case.scf0 files (copy them to
a different name) as well as the message from the queuing system. ...
The “x lapw0” and “mpirun -np 4 $WIENROOT/lapw0_mpi lapw0.def” are
interactively executed correctly.
The “x lapw0 -p” is also correctly executed using the following “.machines”
file:
lapw0:e0017:4
>The same thing could be made with lapw1
The “x lapw1” and “mpirun -np 4 $WIENROOT/lapw1_mpi lapw1.def” are also
correctly executed interactively with no problem. But “x lapw1 -p” stops
when I use the following “.machines” file:
1:e0017:2
1:e0017:2
bash: mpirun: command not found
The output files are gathered into https://files.fm/u/7cssehdck.
Would you, please, help me to fix the parallel problem too?
Sincerely yours,
Leila
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] MPI error

2021-05-19 Thread leila mollabashi
Dear all wien2k users,
Thank you for your reply and guidance.
 > You need to link with the blacs library for openmpi.
I tried to recompile WIEN2k linking the blacs library for openmpi
(“mkl_blacs_openmpi_lp64”), but it failed due to gfortran errors. A video of
this recompilation is available at:
https://files.fm/u/zzwhjjj5q
The SRC_lapw0/lapw1 compile.msg files are uploaded to:
https://files.fm/u/zuwukxy8x and https://files.fm/u/cep4pvnvd
The openmpi and fftw builds on the cluster are compiled with gfortran. So I
have also installed openmpi 4.1.0 and fftw 3.3.9 in my home directory, after
loading ifort and icc, with the following commands:
./configure --prefix=/home/users/mollabashi/expands/openmpi CC=icc F77=ifort
FC=ifort --with-slurm --with-pmix --enable-shared --with-hwloc=internal
./configure --prefix=/home/users/mollabashi/expands/fftw MPICC=mpicc CC=icc
F77=ifort --enable-mpi --enable-openmp --enable-shared
In this way, WIEN2k compiled correctly, as shown in the video at
https://files.fm/u/rk3vfqv5g. But the mpi run fails with an openmpi-related
error, as shown in https://files.fm/u/tcz2fvwpg. The .bashrc, submit.sh and
slurm.out files are in https://files.fm/u/dnrrwqguy
Would you please guide me on how to solve the gfortran errors?
Should I install openmpi with a different configuration to solve the slurm
error of the mpi calculation?
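
A quick way to confirm which mpi and fftw WIEN2k actually picks up at run
time is a minimal sketch along the lines of the diagnostic commands suggested
later in this thread (the $WIENROOT variable is assumed to point to the
WIEN2k installation):

which mpirun mpif90
mpirun --version
ldd $WIENROOT/lapw1_mpi | grep -i -e mpi -e fftw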
Sincerely yours,
Leila

On Thu, May 6, 2021 at 9:44 PM Laurence Marks 
wrote:

> Peter beat me to the response -- please do as he says and move stepwise
> forward, posting single steps if they fail.
>
> On Thu, May 6, 2021 at 10:38 AM Peter Blaha 
> wrote:
>
>> Once the blacs problem has been fixed, the next step is to run lapw0 in
>> sequential and parallel mode.
>>
>> Add:
>>
>> x lapw0 and check the case.output0 and case.scf0 files (copy them to
>> a different name) as well as the message from the queuing system.
>>
>> add:   mpirun -np 4 $WIENROOT/lapw0_mpi lapw0.def
>> and check the messages and compare the results with the previous
>> sequential run.
>>
>> And finally:
>> create a .machines file with:
>> lapw0:localhost:4
>>
>> and execute
>> x lapw0 -p
>>
>> -
>> The same thing could be made with lapw1
>>
>>
>> --
> Professor Laurence Marks
> Department of Materials Science and Engineering
> Northwestern University
> www.numis.northwestern.edu
> "Research is to see what everybody else has seen, and to think what nobody
> else has thought" Albert Szent-Györgyi
> ___
> Wien mailing list
> Wien@zeus.theochem.tuwien.ac.at
> http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> SEARCH the MAILING-LIST at:
> http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
>
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] MPI error

2021-05-06 Thread leila mollabashi
>> [6]
>> https://stackoverflow.com/questions/10056898/how-do-you-check-the-version-of-openmpi
>> [7] https://www.open-mpi.org/faq/?category=mpi-apps#general-build
>>
>> 3) Those cluster administrators are usually more savvy than I am with
>> installation and optimization of software (using compiler documentation,
>> e.g. [8,9]) on a high performance computing (hpc) supercomputer [10,11].
>> They would know your situation better.  For example, they could login to
>> their administrator account on the cluster to install WIEN2k only in your
>> user account directory (/home/users/mollabashi), and they would know how to
>> set the appropriate access permissions [12].  Alternatively, if your not
>> using a personal laptop but a computer at the organization to remotely
>> connect to the cluster then they might use remote desktop access [13] to
>> help you with the installation within only your account.  Or they might use
>> another method.
>> [8]
>> https://software.intel.com/content/www/us/en/develop/articles/download-documentation-intel-compiler-current-and-previous.html
>> [9] https://gcc.gnu.org/onlinedocs/
>> [10]
>> https://www.usgs.gov/core-science-systems/sas/arc/about/what-high-performance-computing
>> [11] https://en.wikipedia.org/wiki/Supercomputer
>> [12]
>> https://www.oreilly.com/library/view/running-linux-third/156592469X/ch04s14.html
>> [13] https://en.wikipedia.org/wiki/Desktop_sharing
>>
>> On 5/4/2021 3:40 PM, Laurence Marks wrote:
>>
>> For certain, "/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin/mpiexec
>> /home/users/mollabashi/codes/v21.1/run_lapw -p" is completely wrong. You
>> do not, repeat do not, use mpirun or mpiexec to start run_lapw. It has to be
>> started by simply "run_lapw -p ..." by itself.
>>
>> I suggest that you create a very simple job which has the commands:
>>
>> which mpirun
>> which lapw1_mpi
>> echo $WIENROOT
>> ldd $WIENROOT/lapw1_mpi
>> ldd $WIENROOT/lapw1
>> echo env
>> echo $PATH
>>
>> Run this interactively as well as in a batch job and compare. You will
>> find that there are something which are not present when you are launching
>> your slurm job that are present interactively. You need to repair these
>> with relevant PATH/LD_LIBRARY_PATH etc
>>
>> Your problems are not Wien2k problems, they are due to incorrect
>> modules/script/environment or similar. Have you asked your sysadmin for
>> help? I am certain that someone local who is experienced with standard
>> linux can tell you very quickly what to do.
>>
>> N.B., there is an error in your path setting.
>>
>> On Tue, May 4, 2021 at 3:38 PM leila mollabashi 
>> wrote:
>>
>>> Dear all WIEN2k users,
>>> Thank you for your guides.
>>> >take care on the correct loc

Re: [Wien] MPI error

2021-05-04 Thread leila mollabashi
Dear all WIEN2k users,
Thank you for your guides.
>take care on the correct location ...
It is the /usr/share/Modules/init
After adding the “source /usr/share/Modules/init/tcsh” line into the
script, the same error appeared:
mpirun: command not found

In fact, both with and without “source /usr/share/Modules/init/tcsh”, the
slurm.out file reports “ module load complete ”.

I noticed that “export” is a bash command, so I used the following commands
to add openmpi and fftw to the paths:
setenv LD_LIBRARY_PATH
{$LD_LIBRARY_PATH}:/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/lib:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/lib
set path = ($path
/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/bin)
But the result is the same:
bash: mpirun: command not found

When using this line in the script:
/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin/mpiexec
/home/users/mollabashi/codes/v21.1/run_lapw -p
the calculation stopped with the following error:
mpirun does not support recursive calls

> I wonder that you have only single modules…
There are different versions of ifort and mkl: ifort/15.0.0, ifort/15.0.3,
ifort/17.0.1, ifort/19.1.3.304(default) mkl/11.2, mkl/11.2.3.187
mkl/2017.1.132,
mkl/2019.2.187, mkl/2020.0.4(default). I used the defaults
> you may also wish to make a single module file to be loaded…
That is a good idea.
> On our cluster we have different W2k modules ….
As you know WIEN2k is not a free code and the users of the cluster that I
am using are not registered WIEN2k users. Thus, according to my moral
commitment to the WIEN2k developers, I cannot ask the administrator to
install it on the cluster. I should install it on my user account.

Sincerely yours,
Leila
>PS.: maybe one should mention this tcsh "problem" in the slurm.job example
on the FAQ page by adding (or similar)…
That is a good idea. Thank you for your suggestion.

On Mon, May 3, 2021 at 4:18 PM Fecher, Gerhard  wrote:

> Dear Leila
> In your first mail you mentioned that you use
>  slurm.job
> with the added lines
>module load openmpi/4.1.0_gcc620
>module load ifort
>module load mkl
>
> Sorry that my Sunday evening answer was too short, here a little more
> detail:
> I guess your login shell is bash, and you run the command
>  sbatch slurm.job
> that is written for tcsh, but tcsh does not know where the module command
> is, therefore the job file should tell it where it is,
> e.g.: the beginning of slurm.job should look like
>
> #!/bin/tcsh
> #
> # Load the respective software module you intend to use, here for tcsh
> shell
> # NOTE: you may need to edit the source line !
> source /usr/share/lmod/lmod/init/tcsh
> module load openmpi/4.1.0_gcc620
> module load ifort
> module load mkl
>
> take care on the correct location it may be in: /usr/share/modules/init/csh
> if you do not find its correct location then ask your administrator
>
> I wonder that you have only single modules for ifort and mkl and not
> different version,
> I guess that are defaults, but which ? ask your administrator;
> you may also wish to make a single module file to be loaded, and
> you may also wish to send the output to the data nirvana by using >&
> /dev/null
> in that case you may have only the lines (as an example)
> source /usr/share/lmod/lmod/init/tcsh
> module load Wien2k/wien2k_21_intel19 >& /dev/null
> echo -n "Running Wien2k" $WienVersion
>
>
> PS.: maybe one should mention this tcsh "problem"  in the slurm.job
> example on the FAQ page by adding (or similar)
>   #  NOTE: you may need to edit the following line !
>   #  source /usr/share/lmod/lmod/init/tcsh
> as modules are frequently used on clusters and allow easily to change
> between different versions.
> On our cluster we have different W2k modules that have been compiled with
> different libraries, compilers, and/or settings.
>
> PSS.: I am not aware of typos ;-)
>
> Ciao
> Gerhard
>
> DEEP THOUGHT in D. Adams; Hitchhikers Guide to the Galaxy:
> "I think the problem, to be quite honest with you,
> is that you have never actually known what the question is."
>
> ====
> Dr. Gerhard H. Fecher
> Institut of Physics
> Johannes Gutenberg - University
> 55099 Mainz
> 
> Von: Wien [wien-boun...@zeus.theochem.tuwien.ac.at] im Auftrag von leila
> mollabashi [le.mollaba...@gmail.com]
> Gesendet: Montag, 3. Mai 2021 00:35
> An: A Mailing list for WIEN2k users
> Betreff: Re: [Wien] MPI error
>
> Thank you.
>
> On Mon, May 3, 2021, 3:04 AM Laurence Marks  <mailto:laurence.ma...@gmail.com>> wrote:
> You have to solve the "mpirun not found". That is due to your
>

Re: [Wien] MPI error

2021-05-02 Thread leila mollabashi
Thank you.

On Mon, May 3, 2021, 3:04 AM Laurence Marks 
wrote:

> You have to solve the "mpirun not found". That is due to your
> path/nfs/module -- we do not know.
>
> ---
> Prof Laurence Marks
> "Research is to see what everyone else has seen, and to think what nobody
> else has thought", Albert Szent-Györgyi
> www.numis.northwestern.edu
>
> On Sun, May 2, 2021, 17:12 leila mollabashi 
> wrote:
>
>> >You have an error in the LD_LIBRARY_PATH def you sent -- it needs to be
>> "...:$LD_LIB..."
>>
>> Thank you. I have corrected it but I still have error in x lapw1 “mpirun:
>> command not found”
>>
>> >Why not load the modules in the script to run a job? I have loaded but
>> this error happened “bash: mpirun: command not found”.
>>
>> On Mon, May 3, 2021 at 2:23 AM Laurence Marks 
>> wrote:
>>
>>> You have an error in the LD_LIBRARY_PATH def you sent -- it needs to be
>>> "...:$LD_LIB...".
>>>
>>> Why not load the modules in the script to run a job?
>>>
>>> ---
>>> Prof Laurence Marks
>>> "Research is to see what everyone else has seen, and to think what
>>> nobody else has thought", Albert Szent-Györgyi
>>> www.numis.northwestern.edu
>>>
>>> On Sun, May 2, 2021, 16:35 leila mollabashi 
>>> wrote:
>>>
>>>> Dear all WIEN2k users,
>>>>
>>>> Thank you for your reply.
>>>>
>>>> >The error is exactly what it says -- mpirun not found. This has
>>>> something to do with the modules, almost certainly the openmpi one. You
>>>> need to find where mpirun is on your system, and ensure that it is in your
>>>> PATH. This is an issue with your OS, not Wien2k. However...
>>>>
>>>> which mpirun:
>>>>
>>>> /opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin/mpirun
>>>>
>>>> I have installed WIEN2k by loading ifort, mkl, openmpi/4.1.0_gcc620,
>>>>  fftw/3.3.8_gcc620 modules. when I added the path in my .bashrc file as
>>>> follows:
>>>>
>>>> export
>>>> LD_LIBRARY_PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/lib:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/lib:LD_LIBRARY_PATH
>>>>
>>>> export
>>>> PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/bin:$PATH
>>>>
>>>> wien2k does not run:
>>>>
>>>> error while loading shared libraries: libiomp5.so: cannot open shared
>>>> object file: No such file or directory
>>>>
>>>> 0.000u 0.000s 0:00.00 0.0%  0+0k 0+0io 0pf+0w
>>>>
>>>> but without the path and by loading the modules it runs.
>>>>
>>>> > First do "x lapw0 -p", send the .machines file and the last few
>>>> lines of your *.output0*. Then we can confirm if that worked right, did not
>>>> or what.
>>>>
>>>> .machines:
>>>>
>>>> lapw0:e0183:4
>>>>
>>>> 1:e0183:4
>>>>
>>>> 1:e0183:4
>>>>
>>>> Almost end of *output:
>>>>
>>>> TOTAL VALUE = -10433.492442 (H)
>>>>
>>>> :DEN  : DENSITY INTEGRAL  =-20866.98488444   (Ry)
>>>>
>>>> Almost end of *output0001
>>>>
>>>> TOTAL VALUE = -10433.492442 (H)
>>>>
>>>> >Assuming that you used gcc
>>>>
>>>> Yes.
>>>>
>>>> >For certain you cannot run lapw2 without first running lapw1.
>>>>
>>>> Yes. You are right. When x lapw1 –p has not executed I have changed the
>>>> .machines file and run in kpoint parallel mode then changed the .machines
>>>> file again and run lapw2 –p.
>>>>
>>>> >How? Do you mean that there are no error messages?
>>>>
>>>> Yes and I also checked compile.msg in SRC_lapw1
>>>>
>>>> Sincerely yours,
>>>>
>>>> Leila
>>>>
>>>>
>>>> On Mon, May 3, 2021 at 12:42 AM Fecher, Gerhard 
>>>> wrote:
>>>>
>>>>> I guess that module does not work with tcsh
>>>>>
>>>>> Ciao
>>>>> Gerhard
>>>>>
>>>>> DEEP THOUGHT in D. Adams; Hitchhikers Guide to the Galaxy:
>>>>> "I think the problem, t

Re: [Wien] MPI error

2021-05-02 Thread leila mollabashi
>You have an error in the LD_LIBRARY_PATH def you sent -- it needs to be
"...:$LD_LIB..."

Thank you. I have corrected it, but I still get an error in “x lapw1”:
“mpirun: command not found”.

>Why not load the modules in the script to run a job?
I have loaded them, but this error happened: “bash: mpirun: command not found”.

On Mon, May 3, 2021 at 2:23 AM Laurence Marks 
wrote:

> You have an error in the LD_LIBRARY_PATH def you sent -- it needs to be
> "...:$LD_LIB...".
>
> Why not load the modules in the script to run a job?
>
> ---
> Prof Laurence Marks
> "Research is to see what everyone else has seen, and to think what nobody
> else has thought", Albert Szent-Györgyi
> www.numis.northwestern.edu
>
> On Sun, May 2, 2021, 16:35 leila mollabashi 
> wrote:
>
>> Dear all WIEN2k users,
>>
>> Thank you for your reply.
>>
>> >The error is exactly what it says -- mpirun not found. This has
>> something to do with the modules, almost certainly the openmpi one. You
>> need to find where mpirun is on your system, and ensure that it is in your
>> PATH. This is an issue with your OS, not Wien2k. However...
>>
>> which mpirun:
>>
>> /opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin/mpirun
>>
>> I have installed WIEN2k by loading ifort, mkl, openmpi/4.1.0_gcc620,
>>  fftw/3.3.8_gcc620 modules. when I added the path in my .bashrc file as
>> follows:
>>
>> export
>> LD_LIBRARY_PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/lib:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/lib:LD_LIBRARY_PATH
>>
>> export
>> PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/bin:$PATH
>>
>> wien2k does not run:
>>
>> error while loading shared libraries: libiomp5.so: cannot open shared
>> object file: No such file or directory
>>
>> 0.000u 0.000s 0:00.00 0.0%  0+0k 0+0io 0pf+0w
>>
>> but without the path and by loading the modules it runs.
>>
>> > First do "x lapw0 -p", send the .machines file and the last few lines
>> of your *.output0*. Then we can confirm if that worked right, did not or
>> what.
>>
>> .machines:
>>
>> lapw0:e0183:4
>>
>> 1:e0183:4
>>
>> 1:e0183:4
>>
>> Almost end of *output:
>>
>> TOTAL VALUE = -10433.492442 (H)
>>
>> :DEN  : DENSITY INTEGRAL  =-20866.98488444   (Ry)
>>
>> Almost end of *output0001
>>
>> TOTAL VALUE = -10433.492442 (H)
>>
>> >Assuming that you used gcc
>>
>> Yes.
>>
>> >For certain you cannot run lapw2 without first running lapw1.
>>
>> Yes. You are right. When x lapw1 –p has not executed I have changed the
>> .machines file and run in kpoint parallel mode then changed the .machines
>> file again and run lapw2 –p.
>>
>> >How? Do you mean that there are no error messages?
>>
>> Yes and I also checked compile.msg in SRC_lapw1
>>
>> Sincerely yours,
>>
>> Leila
>>
>>
>> On Mon, May 3, 2021 at 12:42 AM Fecher, Gerhard 
>> wrote:
>>
>>> I guess that module does not work with tcsh
>>>
>>> Ciao
>>> Gerhard
>>>
>>> DEEP THOUGHT in D. Adams; Hitchhikers Guide to the Galaxy:
>>> "I think the problem, to be quite honest with you,
>>> is that you have never actually known what the question is."
>>>
>>> 
>>> Dr. Gerhard H. Fecher
>>> Institut of Physics
>>> Johannes Gutenberg - University
>>> 55099 Mainz
>>> 
>>> Von: Wien [wien-boun...@zeus.theochem.tuwien.ac.at] im Auftrag von
>>> Laurence Marks [laurence.ma...@gmail.com]
>>> Gesendet: Sonntag, 2. Mai 2021 21:32
>>> An: A Mailing list for WIEN2k users
>>> Betreff: Re: [Wien] MPI error
>>>
>>> Inlined response and questions
>>>
>>> On Sun, May 2, 2021 at 2:19 PM leila mollabashi >> <mailto:le.mollaba...@gmail.com>> wrote:
>>> Dear Prof. Peter Blaha and WIEN2k users,
>>> Now I have loaded the openmpi/4.1.0 and compiled WIEN2k. The admin told
>>> me that I can use your script in >
>>> http://www.wien2k.at/reg_user/faq/slurm.job

Re: [Wien] MPI error

2021-05-02 Thread leila mollabashi
Dear all

The admin told me that I can use this line in the script
“/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin/mpiexec
/home/users/SOME_USER/app/my_mpi_app -o option 1 -in
/home/users/SOME_USER/path/to/input1  -o ./output1”

Would you please guide me about this line? Should I use “run_lapw -p”
instead of my_mpi_app? I do not know what I should use instead of input1
and output1.
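
As Prof. Marks points out elsewhere in this thread, run_lapw must not be
started through mpirun/mpiexec; a minimal sketch of the relevant part of the
batch script (the -ec value is only an example), with run_lapw itself
invoking mpirun for the *_mpi executables according to .machines:

# after the module loads and after generating the .machines file
run_lapw -p -ec 0.0001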

Sincerely yours,

Leila


On Mon, May 3, 2021 at 2:04 AM leila mollabashi 
wrote:

> Dear all WIEN2k users,
>
> Thank you for your reply.
>
> >The error is exactly what it says -- mpirun not found. This has something
> to do with the modules, almost certainly the openmpi one. You need to find
> where mpirun is on your system, and ensure that it is in your PATH. This is
> an issue with your OS, not Wien2k. However...
>
> which mpirun:
>
> /opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin/mpirun
>
> I have installed WIEN2k by loading ifort, mkl, openmpi/4.1.0_gcc620,
>  fftw/3.3.8_gcc620 modules. when I added the path in my .bashrc file as
> follows:
>
> export
> LD_LIBRARY_PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/lib:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/lib:LD_LIBRARY_PATH
>
> export
> PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/bin:$PATH
>
> wien2k does not run:
>
> error while loading shared libraries: libiomp5.so: cannot open shared
> object file: No such file or directory
>
> 0.000u 0.000s 0:00.00 0.0%  0+0k 0+0io 0pf+0w
>
> but without the path and by loading the modules it runs.
>
> > First do "x lapw0 -p", send the .machines file and the last few lines
> of your *.output0*. Then we can confirm if that worked right, did not or
> what.
>
> .machines:
>
> lapw0:e0183:4
>
> 1:e0183:4
>
> 1:e0183:4
>
> Almost end of *output:
>
> TOTAL VALUE = -10433.492442 (H)
>
> :DEN  : DENSITY INTEGRAL  =-20866.98488444   (Ry)
>
> Almost end of *output0001
>
> TOTAL VALUE = -10433.492442 (H)
>
> >Assuming that you used gcc
>
> Yes.
>
> >For certain you cannot run lapw2 without first running lapw1.
>
> Yes. You are right. When x lapw1 –p has not executed I have changed the
> .machines file and run in kpoint parallel mode then changed the .machines
> file again and run lapw2 –p.
>
> >How? Do you mean that there are no error messages?
>
> Yes and I also checked compile.msg in SRC_lapw1
>
> Sincerely yours,
>
> Leila
>
>
> On Mon, May 3, 2021 at 12:42 AM Fecher, Gerhard 
> wrote:
>
>> I guess that module does not work with tcsh
>>
>> Ciao
>> Gerhard
>>
>> DEEP THOUGHT in D. Adams; Hitchhikers Guide to the Galaxy:
>> "I think the problem, to be quite honest with you,
>> is that you have never actually known what the question is."
>>
>> 
>> Dr. Gerhard H. Fecher
>> Institut of Physics
>> Johannes Gutenberg - University
>> 55099 Mainz
>> ____
>> Von: Wien [wien-boun...@zeus.theochem.tuwien.ac.at] im Auftrag von
>> Laurence Marks [laurence.ma...@gmail.com]
>> Gesendet: Sonntag, 2. Mai 2021 21:32
>> An: A Mailing list for WIEN2k users
>> Betreff: Re: [Wien] MPI error
>>
>> Inlined response and questions
>>
>> On Sun, May 2, 2021 at 2:19 PM leila mollabashi > <mailto:le.mollaba...@gmail.com>> wrote:
>> Dear Prof. Peter Blaha and WIEN2k users,
>> Now I have loaded the openmpi/4.1.0 and compiled WIEN2k. The admin told
>> me that I can use your script in
>> http://www.wien2k.at/reg_user/faq/slurm.job
>> . I added these lines to it too:
>> module load openmpi/4.1.0_gcc620
>> module load ifort
>> module load mkl
>> but this error happened “bash: mpirun: command not found”.
>> The error is exactly what it says -- mpirun not found. This has something
>> to do with the modules, almost certainly the openmpi one. You need to find
>> where mpirun is on your system, and ensure that it is in your PATH. This is
>> an issue with your OS, not Wien2k. However...
>>
>> In an interactive mode “x lapw0 –p” and “x lapw2 –p” are executed MPI but
>> “x lapw1 –p” is stoped with following error:
>> w2k_dispatch_signal(): received: Segmentation fault
>> Is this mpi mode? None of lapw0/1/2 can work in true parallel without
>> mpirun, so there is something major wrong here. I 

Re: [Wien] MPI error

2021-05-02 Thread leila mollabashi
Dear all WIEN2k users,

Thank you for your reply.

>The error is exactly what it says -- mpirun not found. This has something
to do with the modules, almost certainly the openmpi one. You need to find
where mpirun is on your system, and ensure that it is in your PATH. This is
an issue with your OS, not Wien2k. However...

which mpirun:

/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin/mpirun

I have installed WIEN2k by loading the ifort, mkl, openmpi/4.1.0_gcc620 and
fftw/3.3.8_gcc620 modules. When I added the paths in my .bashrc file as
follows:

export
LD_LIBRARY_PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/lib:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/lib:LD_LIBRARY_PATH

export
PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/bin:$PATH

WIEN2k does not run:

error while loading shared libraries: libiomp5.so: cannot open shared
object file: No such file or directory

0.000u 0.000s 0:00.00 0.0%  0+0k 0+0io 0pf+0w

but without these path settings, and with only the modules loaded, it runs.
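
As noted later in the thread, the first export above is missing a “$” in
front of the trailing LD_LIBRARY_PATH, so the path set by the loaded modules
is discarded, which is one plausible reason libiomp5.so is no longer found
(an assumption); a corrected sketch:

export LD_LIBRARY_PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/lib:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/lib:$LD_LIBRARY_PATH
export PATH=/opt/exp_soft/local/generic/openmpi/4.1.0_gcc620/bin:/opt/exp_soft/local/generic/fftw/3.3.8_gcc620/bin:$PATH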

> First do "x lapw0 -p", send the .machines file and the last few lines of
your *.output0*. Then we can confirm if that worked right, did not or what.

.machines:

lapw0:e0183:4

1:e0183:4

1:e0183:4

Almost end of *output:

TOTAL VALUE = -10433.492442 (H)

:DEN  : DENSITY INTEGRAL  =-20866.98488444   (Ry)

Almost end of *output0001

TOTAL VALUE = -10433.492442 (H)

>Assuming that you used gcc

Yes.

>For certain you cannot run lapw2 without first running lapw1.

Yes, you are right. When “x lapw1 -p” did not execute, I changed the
.machines file and ran in k-point parallel mode, then changed the .machines
file again and ran “lapw2 -p”.
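
For context, a k-point-parallel-only .machines file (in contrast to the mpi
one above) would contain one line per k-point job; a sketch with four jobs,
each on one core of host e0183 (hostname only an illustration):

1:e0183
1:e0183
1:e0183
1:e0183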

>How? Do you mean that there are no error messages?

Yes, and I also checked compile.msg in SRC_lapw1.

Sincerely yours,

Leila


On Mon, May 3, 2021 at 12:42 AM Fecher, Gerhard  wrote:

> I guess that module does not work with tcsh
>
> Ciao
> Gerhard
>
> DEEP THOUGHT in D. Adams; Hitchhikers Guide to the Galaxy:
> "I think the problem, to be quite honest with you,
> is that you have never actually known what the question is."
>
> 
> Dr. Gerhard H. Fecher
> Institut of Physics
> Johannes Gutenberg - University
> 55099 Mainz
> 
> Von: Wien [wien-boun...@zeus.theochem.tuwien.ac.at] im Auftrag von
> Laurence Marks [laurence.ma...@gmail.com]
> Gesendet: Sonntag, 2. Mai 2021 21:32
> An: A Mailing list for WIEN2k users
> Betreff: Re: [Wien] MPI error
>
> Inlined response and questions
>
> On Sun, May 2, 2021 at 2:19 PM leila mollabashi  <mailto:le.mollaba...@gmail.com>> wrote:
> Dear Prof. Peter Blaha and WIEN2k users,
> Now I have loaded the openmpi/4.1.0 and compiled WIEN2k. The admin told me
> that I can use your script in http://www.wien2k.at/reg_user/faq/slurm.job
> . I added these lines to it too:
> module load openmpi/4.1.0_gcc620
> module load ifort
> module load mkl
> but this error happened “bash: mpirun: command not found”.
> The error is exactly what it says -- mpirun not found. This has something
> to do with the modules, almost certainly the openmpi one. You need to find
> where mpirun is on your system, and ensure that it is in your PATH. This is
> an issue with your OS, not Wien2k. However...
>
> In an interactive mode “x lapw0 –p” and “x lapw2 –p” are executed MPI but
> “x lapw1 –p” is stoped with following error:
> w2k_dispatch_signal(): received: Segmentation fault
> Is this mpi mode? None of lapw0/1/2 can work in true parallel without
> mpirun, so there is something major wrong here. I doubt that anything
> really executed properly. For certain you cannot run lapw2 without first
> running lapw1. What is your .machines file? what is the content of the
> error files? (cat *.error).
>
> First do "x lapw0 -p", send the .machines file and the last few lines of
> your *.output0*. Then we can confirm if that worked right, did not or what.
>
> --
> I noticed that the FFTW3 and OpenMPI installed on the cluster are both
> compiled by gfortan. But I have compiled WIEN2k by intel ifort. I am not
> sure whether the problem originates from this inconsistency between gfortan
> and ifort.
> Almost everything in FFTW3 and OpenMPI is in fact c. Assuming that you
> used gcc there should be no problem. In general there should be no problem.
>
> I have checked that lapw1 has compiled correctly.
> How? Do you mean that there are no error messages?
>
>
>

Re: [Wien] MPI error

2021-05-02 Thread leila mollabashi
Dear Prof. Peter Blaha and WIEN2k users,

Now I have loaded openmpi/4.1.0 and compiled WIEN2k. The admin told me
that I can use your script from http://www.wien2k.at/reg_user/faq/slurm.job.
I added these lines to it too:

module load openmpi/4.1.0_gcc620

module load ifort

module load mkl

but this error happened: “bash: mpirun: command not found”.

In interactive mode, “x lapw0 -p” and “x lapw2 -p” run with MPI, but “x
lapw1 -p” stops with the following error:

w2k_dispatch_signal(): received: Segmentation fault

--

I noticed that the FFTW3 and OpenMPI installed on the cluster are both
compiled with gfortran, whereas I have compiled WIEN2k with Intel ifort. I am
not sure whether the problem originates from this inconsistency between
gfortran and ifort.

I have checked that lapw1 has compiled correctly.

Sincerely yours,

Leila



On Fri, Apr 23, 2021 at 7:26 PM Peter Blaha 
wrote:

> Recompile with LI, since mpirun is supported (after loading the proper
> mpi).
>
> PS: Ask them if -np and -machinefile is still possible to use. Otherwise
> you cannot mix k-parallel and mpi parallel and for sure, for smaller
> cases it is a severe limitation to have only ONE mpi job with many
> k-points, small matrix size and many mpi cores.
>
> Am 23.04.2021 um 16:04 schrieb leila mollabashi:
> > Dear Prof. Peter Blaha and WIEN2k users,
> >
> > Thank you for your assistances.
> >
> > Here it is the admin reply:
> >
> >   * mpirun/mpiexec command is supported after loadin propper module ( I
> > suggest openmpi/4.1.0 with gcc 6.2.0 or icc )
> >   * you have to describe needed resources (I suggest : --nodes and
> > --ntasks-per-node , please use "whole node" , so ntasks-pper-node=
> > 28 or 32 or 48 , depending of partition)
> >   * Yes, our cluster have "tight integration with mpi" but the
> > other-way-arround : our MPI libraries are compiled with SLURM
> > support, so when you describe resources at the beginning of batch
> > script, you do not have to use "-np" and "-machinefile" options for
> > mpirun/mpiexec
> >
> >   * this error message " btl_openib_component.c:1699:init_one_device" is
> > caused by "old" mpi library, so please recompile your application
> > (WIEN2k) using openmpi/4.1.0_icc19
> >
> > Now should I compile WIEN2k with SL or LI?
> >
> > Sincerely yours,
> >
> > Leila Mollabashi
> >
> >
> > On Wed, Apr 14, 2021 at 10:34 AM Peter Blaha
> > mailto:pbl...@theochem.tuwien.ac.at>>
> wrote:
> >
> > It cannot initialize an mpi job, because it is missing the interface
> > software.
> >
> > You need to ask the computing center / system administrators how one
> > executes a mpi job on this computer.
> >
> > It could be, that "mpirun" is not supported on this machine. You may
> > try
> > a wien2k installation with  system   "LS"  in siteconfig. This will
> > configure the parallel environment/commands using "slurm" commands
> like
> > srun -K -N_nodes_ -n_NP_  ..., replacing mpirun.
> > We used it once on our hpc machine, since it was recommended by the
> > computing center people. However, it turned out that the standard
> > mpirun
> > installation was more stable because the "slurm controller" died too
> > often leading to many random crashes. Anyway, if your system has
> > what is
> > called "tight integration of mpi", it might be necessary.
> >
> > Am 13.04.2021 um 21:47 schrieb leila mollabashi:
> >  > Dear Prof. Peter Blaha and WIEN2k users,
> >  >
> >  > Then by run x lapw1 –p:
> >  >
> >  > starting parallel lapw1 at Tue Apr 13 21:04:15 CEST 2021
> >  >
> >  > ->  starting parallel LAPW1 jobs at Tue Apr 13 21:04:15 CEST 2021
> >  >
> >  > running LAPW1 in parallel mode (using .machines)
> >  >
> >  > 2 number_of_parallel_jobs
> >  >
> >  > [1] 14530
> >  >
> >  > [e0467:14538] mca_base_component_repository_open: unable to open
> >  > mca_btl_uct: libucp.so.0: cannot open shared object file: No such
> > file
> >  > or directory (ignored)
> >  >
> >  > WARNING: There was an error initializing an OpenFabrics device.
> >  >
> >  >Local host:   e0467
> >  >
> >  >Local device: 


Re: [Wien] MPI error

2021-04-23 Thread leila mollabashi
Dear Prof. Peter Blaha and WIEN2k users,

Thank you for your assistance.

Here is the admin's reply:

   - the mpirun/mpiexec command is supported after loading the proper module
   (I suggest openmpi/4.1.0 with gcc 6.2.0 or icc)
   - you have to describe the needed resources (I suggest --nodes and
   --ntasks-per-node; please use the "whole node", so ntasks-per-node = 28, 32
   or 48, depending on the partition)
   - yes, our cluster has "tight integration with mpi", but the other way
   around: our MPI libraries are compiled with SLURM support, so when you
   describe the resources at the beginning of the batch script, you do not
   have to use the "-np" and "-machinefile" options for mpirun/mpiexec


   - this error message ("btl_openib_component.c:1699:init_one_device") is
   caused by an "old" mpi library, so please recompile your application
   (WIEN2k) using openmpi/4.1.0_icc19

Now should I compile WIEN2k with SL or LI?
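
For context, the -np and -machinefile options discussed above enter WIEN2k
through the WIEN_MPIRUN definition written by siteconfig into
parallel_options; a hedged sketch of what that csh-style line typically looks
like (the exact default string may differ between versions):

setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"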

Sincerely yours,

Leila Mollabashi

On Wed, Apr 14, 2021 at 10:34 AM Peter Blaha 
wrote:

> It cannot initialize an mpi job, because it is missing the interface
> software.
>
> You need to ask the computing center / system administrators how one
> executes a mpi job on this computer.
>
> It could be, that "mpirun" is not supported on this machine. You may try
> a wien2k installation with  system   "LS"  in siteconfig. This will
> configure the parallel environment/commands using "slurm" commands like
> srun -K -N_nodes_ -n_NP_  ..., replacing mpirun.
> We used it once on our hpc machine, since it was recommended by the
> computing center people. However, it turned out that the standard mpirun
> installation was more stable because the "slurm controller" died too
> often leading to many random crashes. Anyway, if your system has what is
> called "tight integration of mpi", it might be necessary.
>
> Am 13.04.2021 um 21:47 schrieb leila mollabashi:
> > Dear Prof. Peter Blaha and WIEN2k users,
> >
> > Then by run x lapw1 –p:
> >
> > starting parallel lapw1 at Tue Apr 13 21:04:15 CEST 2021
> >
> > ->  starting parallel LAPW1 jobs at Tue Apr 13 21:04:15 CEST 2021
> >
> > running LAPW1 in parallel mode (using .machines)
> >
> > 2 number_of_parallel_jobs
> >
> > [1] 14530
> >
> > [e0467:14538] mca_base_component_repository_open: unable to open
> > mca_btl_uct: libucp.so.0: cannot open shared object file: No such file
> > or directory (ignored)
> >
> > WARNING: There was an error initializing an OpenFabrics device.
> >
> >Local host:   e0467
> >
> >Local device: mlx4_0
> >
> > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
> >
> > with errorcode 0.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> >
> > You may or may not see output from other processes, depending on
> >
> > exactly when Open MPI kills them.
> >
> >
> --
> >
> > [e0467:14567] 1 more process has sent help message
> > help-mpi-btl-openib.txt / error in device init
> >
> > [e0467:14567] 1 more process has sent help message
> > help-mpi-btl-openib.txt / error in device init
> >
> > [e0467:14567] Set MCA parameter "orte_base_help_aggregate" to 0 to see
> > all help / error messages
> >
> > [warn] Epoll MOD(1) on fd 27 failed.  Old events were 6; read change was
> > 0 (none); write change was 2 (del): Bad file descriptor
> >
> >>Somewhere there should be some documentation how one runs an mpi job on
> > your system.
> >
> > Only I found this:
> >
> > Before ordering a task, it should be encapsulated in an appropriate
> > script understandable for the queue system, e.g .:
> >
> > /home/users/user/submit_script.sl
> >
> > Sample SLURM script:
> >
> > #!/bin/bash -l
> >
> > #SBATCH -N 1
> >
> > #SBATCH --mem 5000
> >
> > #SBATCH --time=20:00:00
> >
> > /sciezka/do/pliku/binarnego/plik_binarny.in > /sciezka/do/pliku/wyjsciowego.out
> >
> > To submit a job to a specific queue, use the #SBATCH -p parameter, e.g.
> >
> > #!/bin/bash -l
> >
> > #SBATCH -N 1
> >
> > #SBATCH --mem 5000
> >
> > #SBATCH --time=20:00:00
> >
> > #SBATCH -p standard
> >
> > /sciezka/do/pliku/binarnego/plik_binarny.in > /sciezka/do/pliku/wyjsciowego.out
> >
>
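
For illustration, the two launch modes described in the quoted reply above look
roughly as follows (the process count is an assumption; the mpirun form mirrors
the lapw0 logs further down in this thread):

# standard installation: WIEN2k starts the mpi binaries via mpirun, e.g.
mpirun -np 4 -machinefile .machine0 $WIENROOT/lapw0_mpi lapw0.def

# with system "LS" in siteconfig, mpirun is replaced by slurm's srun, e.g.
srun -K -N 1 -n 4 $WIENROOT/lapw0_mpi lapw0.def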

Re: [Wien] MPI error

2021-04-13 Thread leila mollabashi
Dear Prof. Peter Blaha and WIEN2k users,

Then, by running x lapw1 -p:

starting parallel lapw1 at Tue Apr 13 21:04:15 CEST 2021

->  starting parallel LAPW1 jobs at Tue Apr 13 21:04:15 CEST 2021

running LAPW1 in parallel mode (using .machines)

2 number_of_parallel_jobs

[1] 14530

[e0467:14538] mca_base_component_repository_open: unable to open
mca_btl_uct: libucp.so.0: cannot open shared object file: No such file or
directory (ignored)

WARNING: There was an error initializing an OpenFabrics device.

  Local host:   e0467

  Local device: mlx4_0

MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD

with errorcode 0.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.

You may or may not see output from other processes, depending on

exactly when Open MPI kills them.

--

[e0467:14567] 1 more process has sent help message help-mpi-btl-openib.txt
/ error in device init

[e0467:14567] 1 more process has sent help message help-mpi-btl-openib.txt
/ error in device init

[e0467:14567] Set MCA parameter "orte_base_help_aggregate" to 0 to see all
help / error messages

[warn] Epoll MOD(1) on fd 27 failed.  Old events were 6; read change was 0
(none); write change was 2 (del): Bad file descriptor

>Somewhere there should be some documentation on how one runs an mpi job on
your system.

I only found this:

Before submitting a job, it should be wrapped in an appropriate script
understandable to the queue system, e.g.:

/home/users/user/submit_script.sl

Sample SLURM script:

#!/bin/bash -l

#SBATCH -N 1

#SBATCH --mem 5000

#SBATCH --time=20:00:00

/sciezka/do/pliku/binarnego/plik_binarny.in > /sciezka/do/pliku/wyjsciowego.out

To submit a job to a specific queue, use the #SBATCH -p parameter, e.g.

#!/bin/bash -l

#SBATCH -N 1

#SBATCH --mem 5000

#SBATCH --time=20:00:00

#SBATCH -p standard

/sciezka/do/pliku/binarnego/plik_binarny.in > /sciezka/do/pliku/wyjsciowego.out

The job must then be submitted using the *sbatch* command

sbatch /home/users/user/submit_script.sl

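Putting the cluster instructions above together with the WIEN2k pieces discussed
later in this thread, a submit script for a parallel run might look roughly like
the following sketch (the core counts and the run_lapw call are assumptions; the
.machines format mirrors the file shown further down in this archive):

#!/bin/bash -l
#SBATCH -N 1
#SBATCH --ntasks-per-node=8
#SBATCH --time=20:00:00

# build a .machines file for the node SLURM assigned to this job
host=$(hostname)
cat > .machines <<EOF
lapw0:$host:4
1:$host:4
1:$host:4
granularity:1
extrafine:1
EOF

# run the scf cycle in parallel mode
run_lapw -p
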
*Submitting interactive tasks*

Interactive tasks can be divided into two groups:

· interactive task (working in text mode)

· interactive task

*Interactive task (working in text mode)*

Submitting an interactive task is very simple and in the simplest case comes
down to issuing the command below.

srun --pty /bin/bash



Sincerely yours,

Leila Mollabashi

On Wed, Apr 14, 2021 at 12:03 AM leila mollabashi 
wrote:

> Dear Prof. Peter Blaha and WIEN2k users,
>
> Thank you for your assistance.
>
> > At least now the error: "lapw0 not found" is gone. Do you understand
> why ??
>
> Yes, I think it is because the path is now clearly known.
>
> >How many slots do you get by this srun command ?
>
> Usually I get a node with 28 CPUs.
>
> >Is this the node with the name  e0591  ???
>
> Yes, it is.
>
> >Of course the .machines file must be consistent (dynamically adapted)
> with the actual nodename.
>
> Yes, to do this I use my script.
>
> When I use “srun --pty -n 8 /bin/bash”, which goes to a node with 8 free
> cores, and run x lapw0 -p, then this happens:
>
> starting parallel lapw0 at Tue Apr 13 20:50:49 CEST 2021
>
>  .machine0 : 4 processors
>
> [1] 12852
>
> [e0467:12859] mca_base_component_repository_open: unable to open
> mca_btl_uct: libucp.so.0: cannot open shared object file: No such file or
> directory (ignored)
>
> [e0467][[56319,1],1][btl_openib_component.c:1699:init_one_device] error
> obtaining device attributes for mlx4_0 errno says Protocol not supported
>
> [e0467:12859] mca_base_component_repository_open: unable to open
> mca_pml_ucx: libucp.so.0: cannot open shared object file: No such file or
> directory (ignored)
>
> LAPW0 END
>
> [1]Done  mpirun -np 4 -machinefile .machine0
> /home/users/mollabashi/v19.2/lapw0_mpi lapw0.def >> .time00
>
> Sincerely yours,
>
> Leila Mollabashi
>
>
>


[Wien] MPI error

2021-04-13 Thread leila mollabashi
Dear Prof. Peter Blaha and WIEN2k users,

Thank you for your assistance.

> At least now the error: "lapw0 not found" is gone. Do you understand why
??

Yes, I think it is because the path is now clearly known.

>How many slots do you get by this srun command ?

Usually I get a node with 28 CPUs.

>Is this the node with the name  e0591  ???

Yes, it is.

>Of course the .machines file must be consistent (dynamically adapted)
with the actual nodename.

Yes, to do this I use my script.

When I use “srun --pty -n 8 /bin/bash”, which goes to a node with 8 free
cores, and run x lapw0 -p, then this happens:

starting parallel lapw0 at Tue Apr 13 20:50:49 CEST 2021

 .machine0 : 4 processors

[1] 12852

[e0467:12859] mca_base_component_repository_open: unable to open
mca_btl_uct: libucp.so.0: cannot open shared object file: No such file or
directory (ignored)

[e0467][[56319,1],1][btl_openib_component.c:1699:init_one_device] error
obtaining device attributes for mlx4_0 errno says Protocol not supported

[e0467:12859] mca_base_component_repository_open: unable to open
mca_pml_ucx: libucp.so.0: cannot open shared object file: No such file or
directory (ignored)

LAPW0 END

[1]Done  mpirun -np 4 -machinefile .machine0
/home/users/mollabashi/v19.2/lapw0_mpi lapw0.def >> .time00

Sincerely yours,

Leila Mollabashi


Re: [Wien] MPI error

2021-04-12 Thread leila mollabashi
Dear Prof. Peter Blaha and WIEN2k users,

Thank you. Now my .machines file is:

lapw0:e0591:4

1:e0591:4

1:e0591:4

granularity:1

extrafine:1

I have installed WIEN2k under my user account on the cluster. When I use “srun
--pty /bin/bash”, it goes to one node of the cluster; the “ls -als
$WIENROOT/lapw0”, “x lapw0” and “lapw0 lapw0.def” commands are executed,
but “x lapw0 -p” is not. The following error appears:

There are not enough slots available in the system to satisfy the 4

slots that were requested by the application:

  /home/users/mollabashi/v19.2/lapw0_mpi

Either request fewer slots for your application, or make more slots

available for use.

A "slot" is the Open MPI term for an allocatable unit where we can

launch a process.  The number of slots available are defined by the

environment in which Open MPI processes are run:

  1. Hostfile, via "slots=N" clauses (N defaults to number of

 processor cores if not provided)

  2. The --host command line parameter, via a ":N" suffix on the

 hostname (N defaults to 1 if not provided)

  3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)

  4. If none of a hostfile, the --host command line parameter, or an

 RM is present, Open MPI defaults to the number of processor cores

In all the above cases, if you want Open MPI to default to the number

of hardware threads instead of the number of processor cores, use the

--use-hwthread-cpus option.

Alternatively, you can use the --oversubscribe option to ignore the

number of available slots when deciding the number of processes to

launch.

--

[1]Exit 1mpirun -np 4 -machinefile .machine0
/home/users/mollabashi/v19.2/lapw0_mpi lapw0.def >> .time00

0.067u 0.091s 0:02.97 5.0%  0+0k 52272+144io 54pf+0w
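
In other words, mpirun asks for 4 slots but the interactive allocation provides
fewer. One way to make the slots available, consistent with what is tried later
in this thread, is to request them from SLURM when starting the interactive
shell (the number is an example):

# request an interactive shell with 4 tasks so that "mpirun -np 4" has 4 slots
srun --pty -n 4 /bin/bash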

mollabashi@eagle:~/test1/cein$ cat .machines

Sincerely yours,

Leila Mollabashi

On Sun, Apr 11, 2021 at 9:40 PM Peter Blaha 
wrote:

> Your script is still wrong.
> The .machines file should show:
>
> lapw0:e0150:4
>
> not
> lapw0:e0150
> :4
>
> Therefore it tries to execute lapw0 instead of lapw0_mpi.
> ---
> Anyway, the first thing is to get the sequential wien2k running. You
> claimed that WIENROOT is known in the batch job.
> Please do:
> ls -als $WIENROOT/lapw0
>
> Does it have execute permission ?
>
> If yes, execute lapw0 explicitly:
>
> x lapw0
>
> and a second time:
>
> lapw0 lapw0.def
>
>
> Am 11.04.2021 um 13:17 schrieb leila mollabashi:
> > Dear Prof. Peter Blaha,
> >
> > Thank you for your guidance. You are right. I edited the script and added
> > “source ~/.bashrc, echo 'lapw0:'`hostname`' :'$nproc >> .machines” to it.
> >
> > The created .machines file is as follows:
> >
> > lapw0:e0150
> >
> > :4
> >
> > 1:e0150:4
> >
> > 1:e0150:4
> >
> > granularity:1
> >
> > extrafine:1
> >
> > The slurm.out file is:
> >
> > e0150
> >
> > # .machines
> >
> > bash: lapw0: command not found
> >
> > real 0m0.001s
> >
> > user 0m0.001s
> >
> > sys 0m0.000s
> >
> > grep: *scf1*: No such file or directory
> >
> > grep: lapw2*.error: No such file or directory
> >
> >>  stop error
> >
> > When I used the following commands:
> >
> > echo $WIENROOT
> > which lapw0
> > which lapw0_mpi
> >
> > The following paths were printed:
> >
> > /home/users/mollabashi/v19.2
> >
> > /home/users/mollabashi/v19.2/lapw0
> >
> > /home/users/mollabashi/v19.2/lapw0_mpi
> >
> > But the error still exists:
> >
> > bash: lapw0: command not found
> >
> > When I used your script (from the FAQ page), the .machines file was
> > generated once.
> >
> > But it stopped due to an error.
> >
> > test.scf1_1: No such file or directory.
> >
> > grep: *scf1*: No such file or directory
> >
> > FERMI - Error
> >
> > When I loaded openmpi and ifort as well as icc in the script, this error
> > appeared:
> >
> >>SLURM_NTASKS_PER_NODE:  Undefined variable.
> >
> > Every time after that, the
> >
> > >SLURM_NTASKS_PER_NODE:  Undefined variable
> >
> > error happened when I used your scripts without changing them. I have
> > tried several times, even in a new directory, with no positive effect.
> >
> >>SLURM_NTASKS_PER_NODE:  Undefined variable.
> >
> > Sincerely yours

[Wien] MPI error

2021-04-11 Thread leila mollabashi
Dear Prof. Peter Blaha,

Thank you for your guidance. You are right. I edited the script and added
“source ~/.bashrc, echo 'lapw0:'`hostname`' :'$nproc >> .machines” to it.

The created .machines file is as follows:

lapw0:e0150

:4

1:e0150:4

1:e0150:4

granularity:1

extrafine:1
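
As Prof. Blaha points out in his reply (earlier in this archive), the lapw0
entry must be a single line of the form "lapw0:e0150:4". A sketch of an echo
command that writes it that way (the core count is an example):

nproc=4
echo "lapw0:$(hostname):$nproc" >> .machines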

The slurm.out file is:

e0150

# .machines

bash: lapw0: command not found

real 0m0.001s

user 0m0.001s

sys 0m0.000s

grep: *scf1*: No such file or directory

grep: lapw2*.error: No such file or directory

>   stop error

When I used the following commands:

echo $WIENROOT
which lapw0
which lapw0_mpi

The following paths were printed:

/home/users/mollabashi/v19.2

/home/users/mollabashi/v19.2/lapw0

/home/users/mollabashi/v19.2/lapw0_mpi

But the error still exists:

bash: lapw0: command not found

When I used your script (from the FAQ page), the .machines file was
generated once.

But it stopped due to an error.

test.scf1_1: No such file or directory.

grep: *scf1*: No such file or directory

FERMI - Error

When I loaded openmpi and ifort as well as icc in the script, this error
appeared:

>SLURM_NTASKS_PER_NODE: Undefined variable.

Every time after that, the

>SLURM_NTASKS_PER_NODE: Undefined variable

error happened when I used your scripts without changing them. I have tried
several times, even in a new directory, with no positive effect.

>SLURM_NTASKS_PER_NODE: Undefined variable.

Sincerely yours,

Leila Mollabashi


[Wien] test

2021-04-10 Thread leila mollabashi
This is a test e-mail to check whether my e-mail can be sent to the mailing
list.


[Wien] Error in MPI run

2021-03-29 Thread leila mollabashi
Dear Prof. Laurence Marks

Thank you for your kind reply.

>Presumably you have not exported WIENROOT when you started your job, and/or
it is not exported by openmpi. Check how to use mpi on your system,
including exporting PATH.

Since I have configured WIEN2k correctly, WIENROOT has been exported, so I think
openmpi is not exported.

I also contacted the admin, but the problem is still not solved.
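
For reference, the kind of environment export the quoted advice refers to might
look like this in a batch script (the WIENROOT path below is the one printed
elsewhere in this thread and is only an example):

export WIENROOT=/home/users/mollabashi/v19.2
export PATH=$WIENROOT:$PATH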

Sincerely yours,

Leila Mollabashi


[Wien] Error in MPI run

2021-03-28 Thread leila mollabashi
  4. If none of a hostfile, the --host command line parameter, or an

 RM is present, Open MPI defaults to the number of processor cores

In all the above cases, if you want Open MPI to default to the number

of hardware threads instead of the number of processor cores, use the

--use-hwthread-cpus option.

Alternatively, you can use the --oversubscribe option to ignore the

number of available slots when deciding the number of processes to

launch.

--

[1]  + Done  ( cd $PWD; $t $ttt; rm -f
.lock_$lockfile[$p] ) >> .time1_$loop

ce.scf1_1: No such file or directory.

grep: *scf1*: No such file or directory

LAPW2 - Error. Check file lapw2.error

cp: cannot stat ‘.in.tmp’: No such file or directory

grep: *scf1*: No such file or directory

>   stop error

Would you please kindly guide me?

Sincerely yours,

Leila Mollabashi


[Wien] about cfp

2014-12-07 Thread leila mollabashi
Dear WIEN2k users,

I am interested in the cfp software by Pavel Novak. The unsupported
software (goodies) page says that this software calculates crystal field
parameters in rare-earth systems. I would like to know whether I can use it for
other systems, such as 5f or d electrons?

Thank you,