Re: [Wien] error in running .machines file

2018-06-28 Thread Peter Blaha
At least on our SLURM system, mpirun is not supported and you have to 
use srun instead.
In siteconfig, there is even an option "ifort + slurm"; why not try 
these defaults?
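
For illustration, the srun-based defaults end up in
$WIENROOT/parallel_options roughly as sketched below (a minimal sketch
only; the exact placeholders depend on your WIEN2k version, so take the
lines that siteconfig actually writes as authoritative):

=
# $WIENROOT/parallel_options (csh syntax, sourced by the *para scripts)
setenv USE_REMOTE 1       # 1: reach k-parallel jobs via ssh
setenv MPI_REMOTE 0       # 0: the MPI launcher itself spawns remote tasks
setenv WIEN_GRANULARITY 1
# srun replaces mpirun; _NP_ and _EXEC_ are substituted by the scripts
setenv WIEN_MPIRUN "srun -n _NP_ _EXEC_"
=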


On 29.06.2018 at 00:23, venkatesh chandragiri wrote:

Dear Wien2k users,

I have forwarded the suggestions given by Prof. Gavin as well as Prof. 
Marks to the cluster administrator, and it seems that those earlier 
errors were rectified. However, more errors still appear when I submit 
my job to the SLURM-based queuing system: lapw0 runs successfully but 
lapw1 crashes.



==error=
==uplapw1.error
   1  **  Error in Parallel LAPW1
   2  **  LAPW1 STOPPED at Thu Jun 28 09:20:58 CST 2018
   3  **  check ERROR FILES!
   4  'INILPW' - can't open unit:  18
   5  'INILPW' -        filename: MnSb2.vspup
   6  'INILPW' -          status: old          form: formatted
   7  'LAPW1' - INILPW aborted unsuccessfully.
=

output
30 begin time is Thu Jun 28 09:20:50 CST 2018
31 mpirun: Command not found.
32 setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument
33 LAPW1 - Error
34 setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument
35 LAPW1 - Error
36 setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument
37 LAPW1 - Error

===

jobscript ==

After the #SBATCH commands and bash script for .machines

the commands for runsp_lapw is given below

109 wien2k=`runsp_lapw -NI -i 200 -ec 0.1 -cc 0.0001 -p`
110 #yhrun -N 1 -p  sz-renwei -n 24 $wien2k
111 #srun $wien2k
112 mpirun -n 24 -ppn 24 runsp_lapw -NI -i 200 -ec 0.1 -cc 0.0001 -p
113 #mpirun -np 24 $wien2k

==

We tried to run different sets of commands for runsp_lapw. All lead to 
the same error as shown in the output.


==
[renwei@ln3%th2 ~]$ which mpirun
/usr/local/mpi3/bin/mpirun
[renwei@ln3%th2 ~]$ whereis mpirun
mpirun: /opt/mpich/bin/mpirun
=

Is there something I am missing in the mpirun command for the SLURM 
queuing system?


Please help me find out where the problem is.

thank you

with best regards

venkatesh








--
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300  FAX: +43-1-58801-165982
Email: bl...@theochem.tuwien.ac.at    WIEN2k: http://www.wien2k.at
WWW: http://www.imc.tuwien.ac.at/tc_blaha




Re: [Wien] error in running .machines file

2018-06-28 Thread Gavin Abo

[renwei@ln3%th2 ~]$ which mpirun
/usr/local/mpi3/bin/mpirun
31 mpirun: Command not found.


Similar to the "manpath: command not found" and "libmpi.so.12 => not 
found" errors that you say are gone now, mpirun seems to be installed 
okay on one of the nodes but not on all of them, so the cluster 
administrator should be able to rectify that too.



112 mpirun -n 24 -ppn 24 runsp_lapw -NI -i 200 -ec 0.1 -cc 0.0001 -p


Regarding the above line used in your job script, using mpirun to 
execute "runsp_lapw -p" may be problematic, because "runsp_lapw -p" 
will itself execute mpirun again.


Though, I'm not an expert on slurm, so I could be wrong.  The job 
scripts I have seen for other clusters used a line without the 
"mpirun", e.g.:


runsp_lapw -NI -i 200 -ec 0.1 -cc 0.0001 -p
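
In a full job script, that call would sit roughly as in the sketch
below (the SBATCH values are copied from your own script and are
placeholders, not recommendations; the .machines generation is assumed
to happen first):

=
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --job-name=MnSb2
# ... generate .machines here from $SLURM_JOB_NODELIST ...
# then call the WIEN2k driver directly; it starts mpirun/srun itself
runsp_lapw -NI -i 200 -ec 0.1 -cc 0.0001 -p
=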

If you need to adjust the mpirun parameters, I believe you can do that 
in siteconfig (or by manually editing the WIEN2k_OPTIONS file).


For example, one possible way to check what your WIEN2k mpirun 
settings are is to use the following terminal command:


username@computername:~/Desktop$ grep MPIRUN $WIENROOT/WIEN2k_OPTIONS
current:MPIRUN:mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_


Re: [Wien] error in running .machines file

2018-06-28 Thread Laurence Marks
You will need to talk more to your sysadmin (maybe show this email).

The simple one is the line
31 mpirun: Command not found.

This means what it says -- mpirun is not in your PATH and/or some other
command is needed. This is being set in $WIENROOT/mpirun.
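
A quick way to verify is to run the same check on the compute nodes
rather than only on the login node, e.g. (a sketch; adjust to your
allocation):

srun -N 1 -n 1 which mpirun    # from inside an allocation / job script
ssh cn308 which mpirun         # or directly, if ssh to nodes is allowed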

The other one
32 setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument

is not so simple, and perhaps your sysadmin can contact the list (or me)
directly. This may not matter (it is only a warning), although it would be
good to cure. How to cure it depends upon how setrlimit has been
implemented on your system, which nobody can tell you except your admin.
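
One common thing to try, with no guarantee since it depends on how the
limits are administered on your cluster, is to raise the stack limit
explicitly in the job script before WIEN2k starts:

ulimit -s unlimited    # bash; in csh use: limit stacksize unlimited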

On Thu, Jun 28, 2018 at 5:23 PM, venkatesh chandragiri <
venkyphysicsi...@gmail.com> wrote:

> Dear Wien2k users,
>
> I have forwarded the suggestions given by Prof. Gavin as well as Prof.
> Marks to the cluster administrator, and it seems that those earlier
> errors were rectified. However, more errors still appear when I submit
> my job to the SLURM-based queuing system: lapw0 runs successfully but
> lapw1 crashes.
>
>
> ==error=
> ==uplapw1.error
>   1  **  Error in Parallel LAPW1
>   2  **  LAPW1 STOPPED at Thu Jun 28 09:20:58 CST 2018
>   3  **  check ERROR FILES!
>   4  'INILPW' - can't open unit:  18
>   5  'INILPW' -        filename: MnSb2.vspup
>   6  'INILPW' -          status: old          form: formatted
>   7  'LAPW1' - INILPW aborted unsuccessfully.
> =
>
> output
> 30 begin time is Thu Jun 28 09:20:50 CST 2018
> 31 mpirun: Command not found.
> 32 setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument
> 33 LAPW1 - Error
> 34 setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument
> 35 LAPW1 - Error
> 36 setrlimit(): WARNING: Cannot raise stack limit, continuing: Invalid argument
> 37 LAPW1 - Error
>
> ===
>
> jobscript ==
>
> After the #SBATCH commands and bash script for .machines
>
> the commands for runsp_lapw is given below
>
> 109 wien2k=`runsp_lapw -NI -i 200 -ec 0.1 -cc 0.0001 -p`
> 110 #yhrun -N 1 -p  sz-renwei -n 24 $wien2k
> 111 #srun $wien2k
> 112 mpirun -n 24 -ppn 24 runsp_lapw -NI -i 200 -ec 0.1 -cc 0.0001 -p
> 113 #mpirun -np 24 $wien2k
>
> ==
>
> We tried to run different sets of commands for runsp_lapw. All lead to
> the same error as shown in the output.
>
> ==
> [renwei@ln3%th2 ~]$ which mpirun
> /usr/local/mpi3/bin/mpirun
> [renwei@ln3%th2 ~]$ whereis mpirun
> mpirun: /opt/mpich/bin/mpirun
> =
>
> Is there something I am missing in the mpirun command for the SLURM
> queuing system?
>
> Please help me find out where the problem is.
>
> thank you
>
> with best regards
>
> venkatesh
>
>
>
>


-- 
Professor Laurence Marks
"Research is to see what everybody else has seen, and to think what nobody
else has thought", Albert Szent-Gyorgi
www.numis.northwestern.edu ; Corrosion in 4D: MURI4D.numis.northwestern.edu
Partner of the CFW 100% program for gender equity, www.cfw.org/100-percent
Co-Editor, Acta Cryst A




Re: [Wien] error in running .machines file

2018-06-16 Thread Gavin Abo
The "ssh cn308 ldd $WIENROOT/lapw0_mpi" is finding files for your ifort 
installation like 
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_scalapack_lp64.so 
just fine.  So your environmental variables seem to be setup and working 
fine on both nodes.  It looks like the 
/opt/intel/impi/5.0.2.044/intel64/lib/libmpifort.so.12 exists on the 
renwei node but not on the cn308 node.    It looks to me that Intel MPI 
(impi) is not installed on the cn308 node.


Perhaps the cn308 node is using a different partition or a different 
shared drive.  I have read that there are different possible solutions 
for the slurm cluster problem you seem to have, depending on how it is 
configured [ 
https://lists.schedmd.com/pipermail/slurm-users/2017-December/000272.html 
].  You might be able to check which partition the renwei node and cn308 
node are using with sinfo [ https://slurm.schedmd.com/sinfo.html ].
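
For example (a sketch; the node name is the one from your session):

sinfo -N -l                 # list nodes with their partitions and state
scontrol show node cn308    # detailed view of a single node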


Maybe you just have to have your cluster manager (administrator, help 
desk, ...) install impi, like what was done for ifort.  To remove the 
"manpath: command not found", the cluster manager probably just has to 
install the man or man-db package on the cn308 node; they should be able 
to check the documentation or forums for the OS that their cluster is 
using on how to install manpath (typically, for example: yum install man 
or apt-get install man-db).  I have never performed administration 
of a slurm cluster, so for additional help with your problem you may 
have to ask a slurm expert (e.g., your cluster manager or the slurm 
mailing list [ https://slurm.schedmd.com/mail.html ]).


On 6/16/2018 4:28 AM, venkatesh chandragiri wrote:


Dear Prof. Marks,

I did "ssh othernode ldd $WIENROOT/lapw0_mpi".

=

[renwei@ln3 ~]$  ssh cn308 ldd $WIENROOT/lapw0_mpi
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/bin/mklvars.sh: line 
118: manpath: command not found

    linux-vdso.so.1 =>  (0x7fffd8fff000)
    libfftw3_mpi.so.3 => 
/THFS/home/renwei/venky/soft/fftw/lib/libfftw3_mpi.so.3 
(0x7fd41621d000)
    libmkl_scalapack_lp64.so => 
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_scalapack_lp64.so 
(0x7fd415947000)
    libmkl_blacs_intelmpi_lp64.so => 
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so 
(0x7fd41570a000)
    libfftw3.so.3 => 
/THFS/home/renwei/venky/soft/fftw/lib/libfftw3.so.3 (0x7fd4153fe000)
    libmkl_intel_lp64.so => 
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_intel_lp64.so 
(0x7fd414cb)
    libmkl_intel_thread.so => 
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_intel_thread.so 
(0x7fd413c9)
    libmkl_core.so => 
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_core.so 
(0x7fd41259c000)

    libpthread.so.0 => /lib64/libpthread.so.0 (0x7fd41238)
    libmpifort.so.12 => not found
    libmpi.so.12 => not found
    libdl.so.2 => /lib64/libdl.so.2 (0x7fd412172000)
    librt.so.1 => /lib64/librt.so.1 (0x7fd411f69000)
    libm.so.6 => /lib64/libm.so.6 (0x7fd411ce5000)
    libiomp5.so => 
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libiomp5.so 
(0x7fd4119ca000)

    libc.so.6 => /lib64/libc.so.6 (0x7fd411628000)
    libgcc_s.so.1 => 
/THFS/home/sh-hzw2/software/Matlab2014a//sys/os/glnxa64/libgcc_s.so.1 
(0x7fd411413000)
    libimf.so => 
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libimf.so 
(0x7fd410f5)
    libsvml.so => 
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libsvml.so 
(0x7fd410354000)
    libirng.so => 
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libirng.so 
(0x7fd41014d000)
    libintlc.so.5 => 
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libintlc.so.5 
(0x7fd40fef7000)

    /lib64/ld-linux-x86-64.so.2 (0x7fd416436000)

=

As shown here, libmpifort.so.12 => not found and 
libmpi.so.12 => not found when I run it on the cn308 node.


But these have well-defined paths when I run ldd on "renwei":

    libmpifort.so.12 => 
/opt/intel/impi/5.0.2.044/intel64/lib/libmpifort.so.12 
 (0x2b3a37c98000)
    libmpi.so.12 => 
/opt/intel/impi/5.0.2.044/intel64/lib/libmpi.so.12 
 (0x2b3a37f21000)


===

[renwei@ln3 ~]$ ssh cn308 $WIENROOT/lapw0_mpi
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/bin/mklvars.sh: line 
118: manpath: command not found
/THFS/home/renwei/venky/soft/wien2k/lapw0_mpi: error while loading 
shared libraries: libmpifort.so.12: cannot open shared object file: No 
such file or directory

[renwei@ln3 ~]$


===

[renwei@ln3 ~]$ ssh cn308
Last login: Sat Jun 16 17:59:04 2018 from ln3-gn0
-bash: manpath: command not found
[renwei@cn308 ~]$ 

Re: [Wien] error in running .machines file

2018-06-16 Thread venkatesh chandragiri
Dear Prof. Marks,

I did "ssh othernode ldd $WIENROOT/lapw0_mpi".

=

[renwei@ln3 ~]$  ssh cn308 ldd $WIENROOT/lapw0_mpi
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/bin/mklvars.sh: line 118:
manpath: command not found
linux-vdso.so.1 =>  (0x7fffd8fff000)
libfftw3_mpi.so.3 =>
/THFS/home/renwei/venky/soft/fftw/lib/libfftw3_mpi.so.3 (0x7fd41621d000)
libmkl_scalapack_lp64.so =>
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_scalapack_lp64.so
(0x7fd415947000)
libmkl_blacs_intelmpi_lp64.so =>
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so
(0x7fd41570a000)
libfftw3.so.3 =>
/THFS/home/renwei/venky/soft/fftw/lib/libfftw3.so.3 (0x7fd4153fe000)
libmkl_intel_lp64.so =>
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_intel_lp64.so
(0x7fd414cb)
libmkl_intel_thread.so =>
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_intel_thread.so
(0x7fd413c9)
libmkl_core.so =>
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_core.so
(0x7fd41259c000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7fd41238)

libmpifort.so.12 => not found
libmpi.so.12 => not found
libdl.so.2 => /lib64/libdl.so.2 (0x7fd412172000)
librt.so.1 => /lib64/librt.so.1 (0x7fd411f69000)
libm.so.6 => /lib64/libm.so.6 (0x7fd411ce5000)
libiomp5.so =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libiomp5.so
(0x7fd4119ca000)
libc.so.6 => /lib64/libc.so.6 (0x7fd411628000)
libgcc_s.so.1 =>
/THFS/home/sh-hzw2/software/Matlab2014a//sys/os/glnxa64/libgcc_s.so.1
(0x7fd411413000)
libimf.so =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libimf.so
(0x7fd410f5)
libsvml.so =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libsvml.so
(0x7fd410354000)
libirng.so =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libirng.so
(0x7fd41014d000)
libintlc.so.5 =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libintlc.so.5
(0x7fd40fef7000)
/lib64/ld-linux-x86-64.so.2 (0x7fd416436000)

=

As shown here, libmpifort.so.12 => not found and
libmpi.so.12 => not found when I run it on the cn308 node.

But these have well-defined paths when I run ldd on "renwei":

libmpifort.so.12 => /opt/intel/impi/5.0.2.044/intel64/lib/libmpifort.so.12
(0x2b3a37c98000)
libmpi.so.12 => /opt/intel/impi/5.0.2.044/intel64/lib/libmpi.so.12
(0x2b3a37f21000)

===

[renwei@ln3 ~]$ ssh cn308 $WIENROOT/lapw0_mpi
/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/bin/mklvars.sh: line 118:
manpath: command not found
/THFS/home/renwei/venky/soft/wien2k/lapw0_mpi: error while loading shared
libraries: libmpifort.so.12: cannot open shared object file: No such file
or directory
[renwei@ln3 ~]$


===

[renwei@ln3 ~]$ ssh cn308
Last login: Sat Jun 16 17:59:04 2018 from ln3-gn0
-bash: manpath: command not found
[renwei@cn308 ~]$ $WIENROOT/lapw0_mpi
/THFS/home/renwei/venky/soft/wien2k/lapw0_mpi: error while loading shared
libraries: libmpifort.so.12: cannot open shared object file: No such file
or directory


**

You also mentioned to "use static compilation". I don't understand 
this. Do you mean static compilation of wien2k? How can I do it? (I am 
sorry to ask this; as I come from an experimental background, I don't 
usually come across these kinds of issues.)


thank you.

venkatesh


Re: [Wien] error in running .machines file

2018-06-16 Thread Peter Blaha

cd $WIENROOT
edit parallel_options and set USE_REMOTE and MPI_REMOTE to zero.

Then there is no ssh anymore. (But you can use only one node for k-parallel)
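
The relevant lines then look like this (a sketch of the csh-syntax file):

=
# $WIENROOT/parallel_options
setenv USE_REMOTE 0    # no ssh for the k-point parallelization
setenv MPI_REMOTE 0    # mpirun itself distributes the MPI tasks
=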

Regards

On 16.06.2018 at 12:02, venkatesh chandragiri wrote:

Dear Prof. Gavin,

I am using a slurm-based environment for running the jobs. I have 
attached the typical script I made to submit the job. Although I kept 
the export & source of LD_LIBRARY_PATH and the path to compilervars.sh 
in the script, I have also sourced them again from a separate "myenev" 
file.


===
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --job-name=MnSb2
#SBATCH --output=out%j.txt
#SBATCH --uid=renwei
#SBATCH --partition=sz-renwei
export OMP_NUM_THREADS=1

export PATH="/THFS/home/renwei/softwares/anaconda2/bin:$PATH"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/THFS/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/THFS/home/renwei/venky/soft/libxc/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/THFS/home/renwei/venky/soft/fftw/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/impi/5.0.2.044/intel64/lib


export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib64
export WIENROOT=/THFS/home/renwei/venky/soft/wien2k
source /THFS/opt/intel/composer_xe_2013_sp1.3.174/bin/compilervars.sh intel64
source /THFS/opt/intel/composer_xe_2013_sp1.3.174/bin/ifortvars.sh intel64
source /THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/bin/mklvars.sh intel64
source /THFS/opt/intel/impi/5.0.2.044/intel64/bin/mpivars.sh intel64


source myenev

===
script to generate .machines file
=

wien2k=`runsp_lapw -NI -i 200 -ec 0.1 -cc 0.0001 -p`

srun $wien2k

===
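
(For reference, the .machines file such a script typically produces for
a single 24-core node looks roughly like the sketch below; the hostname
and counts are illustrative, not taken from this cluster:)

=
# .machines (sketch): MPI-parallel lapw0/lapw1/lapw2 on one node
lapw0:cn308:24
1:cn308:24
granularity:1
extrafine:1
=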


The calculations are run on a user account named "renwei", and we have 
a group of students who use the same account, each in a separate 
folder. Wien2k was installed in my local folder "venky/soft/wien2k", 
and calculations are done from "venky/wien2k_sim/MnSb".


This renwei account already contains the .ssh folder. This folder has 
both "id_rsa.pub" and "authorized_keys" files. The content of the 
id_rsa.pub file is already copied into the authorized_keys file.


After following your statement in the earlier mail, the permissions on 
the key files look like


-rw-r-----  authorized_keys
-rw-r--r--  id_rsa.pub

I did ssh to one of the nodes; it does not prompt me for a password, as 
shown below.


[renwei@ln3 ~]$ ssh cn308
Last login: Sat Jun 16 01:20:03 2018 from ln3-gn0
-bash: manpath: command not found
[renwei@cn308 ~]$


Now after doing all these, the error still persists.


venkatesh






--
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300  FAX: +43-1-58801-165982
Email: bl...@theochem.tuwien.ac.at    WIEN2k: http://www.wien2k.at
WWW: http://www.imc.tuwien.ac.at/tc_blaha






Re: [Wien] error in running .machines file

2018-06-15 Thread Laurence Marks
Gavin has already answered in large part, but let me add a little. You have
two questions/issues which are probably unrelated:
***
1) Is my ssh working?
I suggest that you test this by itself, for instance by running "ssh
othernode" from your head node, and also execute some simple commands that
way, e.g. "ssh othernode ls" and (important) "ssh othernode ldd
$WIENROOT/lapw0_mpi".
***
2) Are the relevant libraries etc on "othernode", and how do I set this up?
This is tricky, and the answer almost certainly is specific to your
cluster. I have been told that there are "security issues" with various
commands that are remotely executed or batch commands. You will need to
find out how to export the relevant environment so lapw0 etc know. At the
moment, even though you are setting up everything on your head node, the
lapw0_mpi running remotely does not have this information.

N.B., sometimes you can change the linking options and use static
compilation, which avoids many or all of the issues in 2).
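
For example, with the Intel compilers and Intel MPI, the compiler
wrappers accept a flag to link the MPI libraries statically (a sketch
only; flags vary between versions, so check the mpiifort documentation
and your siteconfig linker options):

=
# added to the parallel linker options in siteconfig, e.g.:
-static_mpi    # link libmpifort/libmpi statically with Intel MPI
=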

_
Professor Laurence Marks
"Research is to see what everybody else has seen, and to think what nobody
else has thought", Albert Szent-Gyorgi
www.numis.northwestern.edu

On Fri, Jun 15, 2018, 4:03 AM venkatesh chandragiri <
venkyphysicsi...@gmail.com> wrote:

> Dear Prof. Laurence Marks,
>
> thanks for your reply. As pointed out in my mail, I kept all the variable
> paths in the bashrc file as well as in the jobscript file. Also, I did the
> "ldd lapw1c_mpi".
>
> the output is ==
>
> [renwei@ln3 ~/venky/soft/wien2k]$ ldd lapw1c_mpi
> linux-vdso.so.1 =>  (0x7ffdad1ea000)
> libfftw3_mpi.so.3 =>
> /THFS/home/renwei/venky/soft/fftw/lib/libfftw3_mpi.so.3 (0x2b621871d000)
> libmkl_scalapack_lp64.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_scalapack_lp64.so
> (0x2b6218934000)
> libmkl_blacs_intelmpi_lp64.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so
> (0x2b621920b000)
> libfftw3.so.3 =>
> /THFS/home/renwei/venky/soft/fftw/lib/libfftw3.so.3 (0x2b6219447000)
> libmkl_intel_lp64.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_intel_lp64.so
> (0x2b6219753000)
> libmkl_intel_thread.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_intel_thread.so
> (0x2b6219ea2000)
> libmkl_core.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_core.so
> (0x2b621aec1000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x003cbce0)
> libmpifort.so.12 =>
> /opt/intel/impi/5.0.2.044/intel64/lib/libmpifort.so.12
> (0x2b621c5d8000)
> libmpi.so.12 => /opt/intel/impi/5.0.2.044/intel64/lib/libmpi.so.12
> (0x2b621c861000)
> libdl.so.2 => /lib64/libdl.so.2 (0x003cbca0)
> librt.so.1 => /lib64/librt.so.1 (0x003cbd60)
> libm.so.6 => /lib64/libm.so.6 (0x003cbc20)
> libiomp5.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libiomp5.so
> (0x2b621cfd5000)
> libc.so.6 => /lib64/libc.so.6 (0x003cbc60)
> libgcc_s.so.1 =>
> /THFS/home/sh-hzw2/software/Matlab2014a//sys/os/glnxa64/libgcc_s.so.1
> (0x2b621d2f1000)
> libimf.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libimf.so
> (0x2b621d506000)
> libsvml.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libsvml.so
> (0x2b621d9ca000)
> libirng.so =>
> /opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libirng.so
> (0x2b621e5c5000)
> libintlc.so.5 =>
> /opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libintlc.so.5
> (0x2b621e7cc000)
> /lib64/ld-linux-x86-64.so.2 (0x003cbbe0)
>
> =
>
> As I highlighted above, libmpifort.so.12 has a well-defined path. Do you
> have any comment on this?
>
>
> Further, you pointed out that "the shared libraries are not present on
> the computer you are connecting to via ssh and/or the path has not been
> exported". I already copied the key in the
> id_rsa.pub file to the authorized_keys file (both are on the same server
> where wien2k is installed) to make ssh password-free.
>
> [renwei@ln3 ~/.ssh]$ ls -la
> total 32
> drwx--  2 renwei renwei  4096 Jun 15 04:10 .
> drwx-- 44 renwei renwei  4096 Jun 15 04:10 ..
> -rw---  1 renwei 

Re: [Wien] error in running .machines file

2018-06-15 Thread Gavin Abo


As pointed in my mail I kept all the variable paths to bashrc file as 
well as in the jobscript file.


You didn't mention you were using a job script.  I assumed you weren't.  
That may be important.  Some queue systems require a flag in the job 
script to export the environment variables correctly, like -V for PBS [ 
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg16338.html 
].  If you are using something other than PBS, you will have to check 
the documentation for your queue system to see if it automatically 
propagates your .bashrc settings or if you have to add a flag or 
something to do so.  I believe many queue systems have their own mailing 
list or forums where you can ask experts for that specific queue system 
if the information is not easy to find in their documentation.
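
(For SLURM, the rough analogue is sbatch's --export option; a sketch,
since the default behavior can depend on the site configuration:)

#SBATCH --export=ALL    # propagate the submitting shell's environment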



I already copied the key in the
id_rsa.pub file to authorized_keys file (both are in the same server 
where wien2k installed)  to make the password free ssh.


[renwei@ln3 ~/.ssh]$ ls -la
total 32
drwx--  2 renwei renwei  4096 Jun 15 04:10 .
drwx-- 44 renwei renwei  4096 Jun 15 04:10 ..
-rw---  1 renwei renwei  1200 May 19 13:46 authorized_keys
-rw---  1 renwei renwei  1675 May 17 13:44 id_rsa
-rw-r--r--  1 renwei renwei   392 May 17 13:44 id_rsa.pub
-rw-r--r--  1 renwei renwei 11641 Jun 14 10:45 known_hosts

do I need to change the permission between the authorized_keys & 
id_rsa.pub ...?


At 
https://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ 
:


chmod 640 is used for authorized_keys, which should give:

-rw-r-----

The r for the group owning the file [ 
https://www.comentum.com/unix-osx-permissions.html ] is different from 
what you have.  What you have may be fine, but you may want to try 
changing it using chmod 640 to be safe.


To get it working, you also may need to copy the SSH key to all of the 
compute nodes.  Did you do that?  I believe that can be done with either 
cat [ 
https://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ 
] or ssh-copy-id [ https://www.ssh.com/ssh/copy-id ].
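
(As terminal commands, those steps are roughly the sketch below, where
"cn308" stands for each compute node:)

chmod 640 ~/.ssh/authorized_keys    # -rw-r----- on the key file
ssh-copy-id renwei@cn308            # copy the public key to the node
ssh cn308 hostname                  # should now work without a password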






Re: [Wien] error in running .machines file

2018-06-15 Thread venkatesh chandragiri
Dear Prof. Laurence Marks,

thanks for your reply. As pointed out in my mail, I kept all the variable
paths in the bashrc file as well as in the jobscript file. Also, I did the
"ldd lapw1c_mpi".

the output is ==

[renwei@ln3 ~/venky/soft/wien2k]$ ldd lapw1c_mpi
linux-vdso.so.1 =>  (0x7ffdad1ea000)
libfftw3_mpi.so.3 =>
/THFS/home/renwei/venky/soft/fftw/lib/libfftw3_mpi.so.3 (0x2b621871d000)
libmkl_scalapack_lp64.so =>
/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_scalapack_lp64.so
(0x2b6218934000)
libmkl_blacs_intelmpi_lp64.so =>
/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so
(0x2b621920b000)
libfftw3.so.3 =>
/THFS/home/renwei/venky/soft/fftw/lib/libfftw3.so.3 (0x2b6219447000)
libmkl_intel_lp64.so =>
/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_intel_lp64.so
(0x2b6219753000)
libmkl_intel_thread.so =>
/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_intel_thread.so
(0x2b6219ea2000)
libmkl_core.so =>
/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64/libmkl_core.so
(0x2b621aec1000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x003cbce0)
libmpifort.so.12 =>
/opt/intel/impi/5.0.2.044/intel64/lib/libmpifort.so.12
(0x2b621c5d8000)
libmpi.so.12 => /opt/intel/impi/5.0.2.044/intel64/lib/libmpi.so.12
(0x2b621c861000)
libdl.so.2 => /lib64/libdl.so.2 (0x003cbca0)
librt.so.1 => /lib64/librt.so.1 (0x003cbd60)
libm.so.6 => /lib64/libm.so.6 (0x003cbc20)
libiomp5.so =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libiomp5.so
(0x2b621cfd5000)
libc.so.6 => /lib64/libc.so.6 (0x003cbc60)
libgcc_s.so.1 =>
/THFS/home/sh-hzw2/software/Matlab2014a//sys/os/glnxa64/libgcc_s.so.1
(0x2b621d2f1000)
libimf.so =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libimf.so
(0x2b621d506000)
libsvml.so =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libsvml.so
(0x2b621d9ca000)
libirng.so =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libirng.so
(0x2b621e5c5000)
libintlc.so.5 =>
/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libintlc.so.5
(0x2b621e7cc000)
/lib64/ld-linux-x86-64.so.2 (0x003cbbe0)

=

As I highlighted above, libmpifort.so.12 has a well-defined path. Do you
have any comment on this?


Further, you pointed out that "the shared libraries are not present on
the computer you are connecting to via ssh and/or the path has not been
exported". I already copied the key in the
id_rsa.pub file to the authorized_keys file (both are on the same server
where wien2k is installed) to make ssh password-free.

[renwei@ln3 ~/.ssh]$ ls -la
total 32
drwx--  2 renwei renwei  4096 Jun 15 04:10 .
drwx-- 44 renwei renwei  4096 Jun 15 04:10 ..
-rw---  1 renwei renwei  1200 May 19 13:46 authorized_keys
-rw---  1 renwei renwei  1675 May 17 13:44 id_rsa
-rw-r--r--  1 renwei renwei   392 May 17 13:44 id_rsa.pub
-rw-r--r--  1 renwei renwei 11641 Jun 14 10:45 known_hosts

Do I need to change the permissions on the authorized_keys & id_rsa.pub
files?

thanks

venkatesh


Re: [Wien] error in running .machines file

2018-06-14 Thread Laurence Marks
If I understand your email correctly, the lines

"/THFS/home/renwei/venky/soft/wien2k/lapw0_mpi: error while loading shared
libraries: libmpifort.so.12: cannot open shared object file: No such file
or directory
 /THFS/home/renwei/venky/soft/wien2k/lapw0_mpi: error while loading shared
libraries: libmpifort.so.12: cannot open shared object file: No such file
or directory

/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/bin/mklvars.sh: line 118:
manpath: command not found"

appear when the task fails. This means that, as the lines say, the shared
libraries are not present on the computer you are connecting to via ssh
and/or the path has not been exported.
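
A quick sanity check (a sketch) is to compare the dynamic-linker
environment on both ends:

echo $LD_LIBRARY_PATH                # on the login node
ssh cn308 'echo $LD_LIBRARY_PATH'    # on the compute node; note the quotes

If the second command prints a path without the impi directories, the
remote shell is not picking up your environment (keep in mind a
non-interactive ssh shell may not source the same startup files).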



On Thu, Jun 14, 2018 at 3:37 PM, venkatesh chandragiri <
venkyphysicsi...@gmail.com> wrote:

>
> Dear wien2k users,
>
> I forgot to mention in my earlier mail that I already have the .ssh
> folder on the server where wien2k is installed and have already copied
> the key in the id_rsa.pub file to the authorized_keys file. But I don't
> know why the second error (when I used the .machines file without the
> lapw0 line) came up.
>
>
>  error without including lapw0 line ==
>
>  LAPW0 END
>  read identiti failed: Success
>  ssh_exchange_identification: Connection closed by remote host^M
>  read identiti failed: Success
>  ssh_exchange_identification: Connection closed by remote host^M
>  read identiti failed: Success
> =
>
> [renwei@ln3 ~/.ssh]$ ls -la
> total 32
> drwx--  2 renwei renwei  4096 Jun 15 04:10 .
> drwx-- 44 renwei renwei  4096 Jun 15 04:10 ..
> -rw---  1 renwei renwei  1200 May 19 13:46 authorized_keys
> -rw---  1 renwei renwei  1675 May 17 13:44 id_rsa
> -rw-r--r--  1 renwei renwei   392 May 17 13:44 id_rsa.pub
> -rw-r--r--  1 renwei renwei 11641 Jun 14 10:45 known_hosts
>
> Do I need to change the permissions on the authorized_keys &
> id_rsa.pub files?
>
> thanks
>
> venkatesh
>
> On Thu, Jun 14, 2018 at 11:49 PM, venkatesh chandragiri <
> venkyphysicsi...@gmail.com> wrote:
>
>> Dear wien2k users,
>>
>> Although I have successfully completed "init_lapw", I got errors when
>> I ran with the .machines file, once including the lapw0 line and once
>> without it.
>>
>> I have already kept all the linking paths in the .bashrc file; they are
>> given below,
>>
>> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/lib/intel64
>> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/impi/5.0.2.044/intel64/lib
>> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/THFS/home/renwei/venky/soft/libxc/lib
>> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/THFS/home/renwei/venky/soft/fftw/lib
>> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64
>> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/lib/intel64
>> source /THFS/opt/intel/composer_xe_2013_sp1.3.174/bin/compilervars.sh intel64
>> source /THFS/opt/intel/composer_xe_2013_sp1.3.174/bin/ifortvars.sh intel64
>> source /THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/bin/mklvars.sh intel64
>> source /THFS/opt/intel/impi/5.0.2.044/intel64/bin/mpivars.sh intel64
>>
>>  error with including lapw0 line ==
>>
>>
>>   **  LAPW1 crashed!
>>  1.909u 4.983s 0:12.42 55.3% 0+0k 0+4416io 0pf+0w
>>  error: command   /THFS/home/renwei/venky/soft/wien2k/lapw1cpara -up -c
>> uplapw1.def   failed
>>
>> =error==
>>  begin time is Thu Jun 14 09:25:19 CST 2018
>>
>>  /THFS/home/renwei/venky/soft/wien2k/lapw0_mpi: error while loading
>> shared libraries: libmpifort.so.12: cannot open shared object file: No such
>> file or directory
>>  /THFS/home/renwei/venky/soft/wien2k/lapw0_mpi: error while loading
>> shared libraries: libmpifort.so.12: cannot open shared object file: No such
>> file or directory
>>
>> /THFS/opt/intel/composer_xe_2013_sp1.3.174/mkl/bin/mklvars.sh: line 118:
>> manpath: command not found
>>
>> 
>>
>>
>>  error without including lapw0 line ==
>>
>>  LAPW0 END
>>  read identiti failed: Success
>>  ssh_exchange_identification: Connection closed by remote host^M
>>  read identiti failed: Success
>>  ssh_exchange_identification: Connection closed by remote host^M
>>  read identiti failed: Success
>> =
>>
>> Can someone help me run the calculations and find out the reasons for
>> the above errors?
>>
>> thank you
>>
>> venkatesh
>>
>>
>


