Re: [Wien] parallel ssh error

2019-09-27 Thread Indranil mal
Respected Sir, on my Linux (Ubuntu 18.04 LTS) the files ssh_config and
sshd_config already contain the lines "SendEnv LANG LC_*" and "AcceptEnv LANG
LC_*", respectively. However, ssh vlsi1 'echo $WIENROOT' prints nothing
(blank). The command ssh vlsi1 'pwd $WIENROOT' prints "/home/vlsi", the
common home directory, and ssh vlsi1 "env" gives:
SSH_CONNECTION=172.27.46.251 44138 172.27.46.233 22
LANG=en_IN
XDG_SESSION_ID=47
USER=niel
PWD=/home/niel
HOME=/home/niel
SSH_CLIENT=172.27.46.251 44138 22
LC_NUMERIC=POSIX
MAIL=/var/mail/niel
SHELL=/bin/bash
SHLVL=1
LANGUAGE=en_IN:en
LOGNAME=niel
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
XDG_RUNTIME_DIR=/run/user/1000
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
_=/usr/bin/env

This is the same as on the server and on the other nodes.
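Note that "SendEnv LANG LC_*" forwards only the locale variables, which is
consistent with WIENROOT coming back blank above. A minimal test of whether a
specific variable survives the ssh hop (a sketch, assuming OpenSSH on both
ends; it also requires "AcceptEnv WIENROOT" or "AcceptEnv *" in the remote
sshd_config, with sshd reloaded):

export WIENROOT=/servernode1                 # set locally first
ssh -o SendEnv=WIENROOT vlsi1 'echo "remote WIENROOT=$WIENROOT"'
# empty output means the variable was not accepted on the remote side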


Sir, after changing the parallel options file in $WIENROOT on the server to

setenv TASKSET "yes"    # changed from "no"
if ( ! $?USE_REMOTE ) setenv USE_REMOTE 1
if ( ! $?MPI_REMOTE ) setenv MPI_REMOTE 1
setenv WIEN_GRANULARITY 1
setenv DELAY 0.1
setenv SLEEPY 1
setenv WIEN_MPIRUN "mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_"
setenv CORES_PER_NODE 1

the error no longer appears, but the run does not progress beyond lapw0;
it gets stuck in lapw1.
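For orientation, a reading of those switches (my own annotations, hedged, not
an authoritative description): TASKSET controls whether spawned processes are
pinned to cores, USE_REMOTE=1 makes the k-point parallel jobs start via ssh,
MPI_REMOTE=1 makes mpirun itself be launched on the remote node via ssh, and
DELAY/SLEEPY are timing knobs of the parallel scripts. In WIEN_MPIRUN, the
tokens _NP_, _HOSTS_, and _EXEC_ are placeholders that the WIEN2k scripts
fill in at run time, so that line would expand to something like
(hypothetical values):

mpirun -np 4 -machinefile .machine1 lapw1c_mpi lapw1_1.def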


What should the parallel options file contain on the server and on all the client nodes?



On Fri, Sep 27, 2019 at 12:05 PM Peter Blaha wrote:

> OK. So the problem seems to be that on your Linux, ssh does not
> send/accept the "environment".
>
> What do you get with:
>
> ssh vlsi2 'echo $WIENROOT'
>
> If you have root permissions, I suggest doing the following:
>
> At least on my Linux (Suse) there is a  /etc/ssh   directory, with files
>
> ssh_config and sshd_config.
>
> Edit these files and add lines:
> SendEnv *     # in ssh_config
> AcceptEnv *   # in sshd_config
>
>
>
> On 9/27/19 11:20 AM, Indranil mal wrote:
> > Respected Sir, as per your suggestion I ran a single process with one
> > iteration successfully and encountered no issue on any of the nodes.
> > However, when running in parallel I face the same error:
> >
> > grep: *scf1*: No such file or directory
> > cp: cannot stat '.in.tmp': No such file or directory
> > FERMI - Error
> > grep: *scf1*: No such file or directory
> > Parallel.scf1_1: No such file or directory.
> > bash: fixerror_lapw: command not found
> > bash: lapw1c: command not found
> > bash: fixerror_lapw: command not found
> > bash: lapw1c: command not found
> > bash: fixerror_lapw: command not found
> > bash: lapw1c: command not found
> > bash: fixerror_lapw: command not found
> > bash: lapw1c: command not found
> > bash: fixerror_lapw: command not found
> > bash: lapw1c: command not found
> > bash: fixerror_lapw: command not found
> > bash: lapw1c: command not found
> > bash: fixerror_lapw: command not found
> > bash: lapw1c: command not found
> > bash: fixerror_lapw: command not found
> > bash: lapw1c: command not found
> >   LAPW0 END
> > hup: Command not found.
> >
> > Previously I had made a mistake with the user name and home directory;
> > now on all the PCs the user name and the home directory (/home/vlsi) are
> > the same, and the working directory is accessible from every node.
> >
> > (ls -l $WIENROOT/lapw1c
> > -rwxr-xr-x 1 vlsi vlsi 2151824 Sep 26 02:41 /servernode01/lapw1c)
> > This is the same on all the PCs.
> >
> >
> >
> >
> > On Thu, Sep 26, 2019 at 1:27 PM Peter Blaha wrote:
> >
> > First of all, one of the errors was: lapw1c: command not found
> >
> > You showed us only the existence of "lapw1", not "lapw1c" with the ls
> > commands.
> >
> > However, since you also have:  fixerror_lapw: command not found
> >
> > I don't think that this is the problem.
> >
> > -
> > I'm more concerned about the different usernames/owners of lapw1 on
> > different computers.
> > It is not important who owns $WIENROOT/*, as long as everybody has
> r-x
> > permissions.
> >
> > However, what is your username and your home-directory on the
> different
> > machines ? It must be the same ! And do you have access to the actual
> > working directory ?
> > In what directory did you start the calculations?
> > Is it a directory called "Parallel" ? What is the full path of that
> on
> > every computer (/casenode1/Parallel ?)
> > --
> >
> > First check would be:
> >
> > On vlsi1 change into the working directory (Parallel ?) and run one
> > iteration without parallelization:   run -i 1
> >
> > then login to vlsi2 via ssh (passwordless), cd into "Parallel" and do
> > another non-parallel cycle.  Does it work ?
> > ---
> >
> >
> > On 9/26/19 11:48 AM, Indranil mal wrote:
> >  > Dear developers and users,
> >  > I have 5 individual Linux (Ubuntu) PCs with Intel i7 octa-core
> >  > processors and 16 GB RAM in each, connected via a 1 Gbps LAN.
> >  > Passwordless 

Re: [Wien] Using fold2Bloch for the path perpendicular to G-K

2019-09-27 Thread Артем Тарасов
Dear Oleg,
thank you for your help!

Your advice was very useful to me, and my problem is now resolved.
In time, I would like to automate the selection of an individual divider for
each segment of the k-path. For example, this would be useful for a path
containing both the Gamma and M points (otherwise the "quality" of G-M may be
worse than that of G-K). I think the procedure should be similar to the
analogous function in xcrysden; a sketch of the idea follows.
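A minimal sketch of that idea, assuming the goal is a number of divisions per
segment proportional to its length in reciprocal space (the segment lengths
below are illustrative, not taken from a real structure):

awk 'BEGIN {
    density = 150;        # divisions per unit of reciprocal length (a knob)
    len["G-K"] = 0.667;   # illustrative segment lengths
    len["G-M"] = 0.577;
    for (s in len)
        printf "%-4s -> %d divisions\n", s, int(density * len[s] + 0.5);
}'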

Best regards,
Artem.

___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


[Wien] impi memory leak

2019-09-27 Thread Laurence Marks
For reference, the impi memory leak that has been present for some years has
now, according to my tests, been fixed in the 2019 Update 5 version (it may
already have been fixed in Update 4).

-- 
Professor Laurence Marks
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu
Corrosion in 4D: www.numis.northwestern.edu/MURI
Co-Editor, Acta Cryst A
"Research is to see what everybody else has seen, and to think what nobody
else has thought"
Albert Szent-Gyorgi


Re: [Wien] parallel ssh error

2019-09-27 Thread Peter Blaha
OK. So the problem seems to be that on your Linux, ssh does not
send/accept the "environment".


What do you get with:

ssh vlsi2 'echo $WIENROOT'

If you have root permissions, I suggest doing the following:

At least on my Linux (Suse) there is a  /etc/ssh   directory, with files

ssh_config and sshd_config.

Edit these files and add lines:
SendEnv *     # in ssh_config
AcceptEnv *   # in sshd_config
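After editing, sshd must re-read its configuration before AcceptEnv takes
effect. A sketch (the service name varies by distribution: often "sshd" on
SUSE and "ssh" on Ubuntu):

sudo systemctl restart sshd    # on every node that is ssh'ed into
ssh vlsi1 'echo $WIENROOT'     # re-test from the launching machine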



On 9/27/19 11:20 AM, Indranil mal wrote:
Respected Sir, as per your suggestion I ran a single process with one
iteration successfully and encountered no issue on any of the nodes.
However, when running in parallel I face the same error:


grep: *scf1*: No such file or directory
cp: cannot stat '.in.tmp': No such file or directory
FERMI - Error
grep: *scf1*: No such file or directory
Parallel.scf1_1: No such file or directory.
bash: fixerror_lapw: command not found
bash: lapw1c: command not found
bash: fixerror_lapw: command not found
bash: lapw1c: command not found
bash: fixerror_lapw: command not found
bash: lapw1c: command not found
bash: fixerror_lapw: command not found
bash: lapw1c: command not found
bash: fixerror_lapw: command not found
bash: lapw1c: command not found
bash: fixerror_lapw: command not found
bash: lapw1c: command not found
bash: fixerror_lapw: command not found
bash: lapw1c: command not found
bash: fixerror_lapw: command not found
bash: lapw1c: command not found
  LAPW0 END
hup: Command not found.

Previously I had made a mistake with the user name and home directory; now on
all the PCs the user name and the home directory (/home/vlsi) are the same,
and the working directory is accessible from every node.

(ls -l $WIENROOT/lapw1c
-rwxr-xr-x 1 vlsi vlsi 2151824 Sep 26 02:41 /servernode01/lapw1c)
This is the same on all the PCs.
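One frequent cause of "command not found" over ssh, offered here as a general
observation rather than a confirmed diagnosis of this thread: on Ubuntu the
stock ~/.bashrc returns early for non-interactive shells, so any PATH
settings placed below that guard are never seen by "ssh host command". A
sketch, assuming the WIEN2k environment is set in ~/.bashrc on every node:

# ~/.bashrc: put the WIEN2k environment ABOVE the early-return guard
export WIENROOT=/servernode1
export PATH=$WIENROOT:$PATH

# Ubuntu's stock guard; everything below it is skipped for
# non-interactive shells such as "ssh vlsi2 lapw1c":
case $- in
    *i*) ;;
      *) return;;
esac

# Verify afterwards from another node:
#   ssh vlsi2 'which lapw1c'    # should print /servernode1/lapw1c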




On Thu, Sep 26, 2019 at 1:27 PM Peter Blaha wrote:


First of all, one of the errors was: lapw1c: command not found

You showed us only the existence of "lapw1", not "lapw1c" with the ls
commands.

However, since you also have:  fixerror_lapw: command not found

I don't think that this is the problem.

-
I'm more concerned about the different usernames/owners of lapw1 on
different computers.
It is not important who owns $WIENROOT/*, as long as everybody has r-x
permissions.

However, what is your username and your home-directory on the different
machines ? It must be the same ! And do you have access to the actual
working directory ?
In what directory did you start the calculations?
Is it a directory called "Parallel" ? What is the full path of that on
every computer (/casenode1/Parallel ?)
--

First check would be:

On vlsi1 change into the working directory (Parallel ?) and run one
iteration without parallelization:   run -i 1

then login to vlsi2 via ssh (passwordless), cd into "Parallel" and do
another non-parallel cycle.  Does it work ?
---
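The same check, scripted over all four nodes (a sketch; "run" is taken to be
the usual alias for WIEN2k's run_lapw, and /casenode1/Parallel is assumed to
be the case directory discussed in this thread):

cd /casenode1/Parallel && run_lapw -i 1        # serial cycle on this node
for host in vlsi1 vlsi2 vlsi3 vlsi4; do        # then on each remote node
    ssh $host "cd /casenode1/Parallel && run_lapw -i 1" \
        || echo "serial run failed on $host"
done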


On 9/26/19 11:48 AM, Indranil mal wrote:
 > Dear developers and users,
 > I have 5 individual Linux (Ubuntu) PCs, each with an Intel i7 octa-core
 > processor and 16 GB RAM, connected via a 1 Gbps LAN. Passwordless ssh is
 > working properly. I have installed WIEN2k 19 on one machine (the M1
 > server) in the directory "/servernode1", and the case directory is
 > "/casenode1"; through NFS I have mounted "servernode1" and "casenode1"
 > on the other four PCs, onto local folders with the same names. I have
 > installed the Intel compilers, libxc, fftw, and elpa on all the nodes
 > individually. I have manually edited the bash file ($WIENROOT path and
 > case directory) and the WIEN2K options file, keeping all the values on
 > the client nodes the same as on the server node.
 >
 > alias cdw="cd /casenode1"
 > export OMP_NUM_THREADS=4
 > #export LD_LIBRARY_PATH=.
 > export EDITOR="emacs"
 > export SCRATCH=./
 > export WIENROOT=/servernode1
 > export W2WEB_CASE_BASEDIR=/casenode1
 > export STRUCTEDIT_PATH=$WIENROOT/SRC_structeditor/bin
 >
 > Now, when I am doing parallel calculations with all the client nodes in
 > the machines file,
 > # k-points are left, they will be distributed to the residual-machine_name.
 > #
 > 1:vlsi1
 > 1:vlsi2
 > 1:vlsi3
 > 1:vlsi4
 >
 > granularity:1
 > extrafine:1
 > #
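A note on the .machines syntax quoted above (my annotations, hedged): a line
"1:vlsi1" assigns a k-point chunk of weight 1 to host vlsi1, "granularity"
controls how finely the k-list is split into chunks, and "extrafine:1"
distributes any leftover k-points one at a time:

1:vlsi1          # k-point chunk of weight 1 runs on host vlsi1
1:vlsi2
1:vlsi3
1:vlsi4
granularity:1    # one chunk per machine entry
extrafine:1      # remaining k-points handed out one by one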
 >
 >
 > and I get the following error:
 >
 > grep: *scf1*: No such file or directory
 > cp: cannot stat '.in.tmp': No such file or directory
 > FERMI - Error
 > grep: *scf1*: No such file or directory
 > Parallel.scf1_1: No such file or directory.
 > bash: fixerror_lapw: command not found
 > bash: lapw1c: command not found
 > bash: fixerror_lapw: command not found
 >