Good to see. As you know, "hup: Command not found" can be ignored:
http://zeus.theochem.tuwien.ac.at/pipermail/wien/2011-April/014484.html
http://zeus.theochem.tuwien.ac.at/pipermail/wien/2010-September/013598.html
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg11131.html
Thank you, Sir, for your instantaneous support. Now it is working smoothly,
with only the "hup: Command not found" message.
On Sun, Sep 29, 2019 at 6:32 PM Gavin Abo wrote:
> Checking with "which lapw1c" on each node (vlsi1, vlsi2, vlsi3, and vlsi4)
> is a good idea. However, since WIENROOT is (blank) [1],
An additional comment, /home/username/WIEN2k (or ~/WIEN2k) is where I
have WIEN2k installed. Whereas, you have installed WIEN2k at
/servernode1 [1]. In the examples of my previous posts (e.g. [2]) you
might find some typographical errors where I forgot to replace my
/home/username/WIEN2k with
So there is progress as now the environment seems to be accepted in the
remote shell.
lapw1para (called by x_lapw, which is called by run_lapw -p) creates the
split klist files (case.klist_1, ...) and the def files lapw1_1.def, ...
It uses the $cwd variable and executes basically:
ssh vlsi1
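A minimal sketch of what lapw1para effectively executes for one k-point chunk. The case path here is hypothetical, and the real script first builds the .def and klist files and reads the node names from the .machines file; this only illustrates the quoting and the use of $cwd:

```shell
# Sketch only: lapw1para records the case directory in $cwd and then, for each
# node entry, launches the chunk there, e.g. chunk 1 on vlsi1.
cwd=/home/vlsi/case                            # assumed case directory
cmd="cd $cwd; \$WIENROOT/lapw1c lapw1_1.def"   # $WIENROOT left for the remote shell
echo "$cmd"                                    # the command text that would be sent
# ssh vlsi1 "$cmd" &                           # the actual remote launch (sketch)
```

Note that $cwd is expanded locally (the remote node must see the same path, e.g. via a shared home), while \$WIENROOT is escaped so the remote shell expands it, which is exactly why the remote environment matters.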
Checking with "which lapw1c" on each node (vlsi1, vlsi2, vlsi3, and
vlsi4) is a good idea. However, since WIENROOT is (blank) [1], it
probably won't work until that is resolved.
It was mentioned that the WIEN2k .bashrc block was setup on each node by
running userconfig [2]. So it definitely
What does
ssh vlsi1 which lapw1c
give, and what does "cat *.error" give in the case directory?
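The same check can be scripted over all four nodes at once (a sketch; it assumes the passwordless ssh already described in the thread):

```shell
# Report on each node whether lapw1c resolves in the remote PATH.
for node in vlsi1 vlsi2 vlsi3 vlsi4; do
    echo "== $node =="
    ssh "$node" 'which lapw1c'   # single quotes: 'which' runs on the remote side
done
cat *.error                      # then inspect the error files in the case directory
```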
Professor Laurence Marks
"Research is to see what everybody else has seen, and to think what nobody
else has thought", Albert Szent-Gyorgi
www.numis.northwestern.edu
On Sun, Sep 29, 2019, 01:17
I had noticed that ssh vlsi1 'echo $WIENROOT/lapw*' seems to pick up the
local environment. Since you are interested in the remote environment,
make sure you issue them as separate commands [1] for vlsi1, vlsi2,
vlsi3, and vlsi4:
ssh vlsi1
echo $WIENROOT/lapw*
exit
ssh vlsi4
echo
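The reason for issuing the echo inside an interactive session (rather than on the ssh command line with double quotes) is shell expansion timing. A runnable local demonstration, no ssh needed:

```shell
# With double quotes the LOCAL shell expands $WIENROOT before ssh ever runs;
# inside single quotes the literal text reaches the remote shell, which expands
# it against the REMOTE environment.
export WIENROOT=/servernode1
single='echo $WIENROOT'    # stays literal; a remote shell would expand it there
double="echo $WIENROOT"    # already expanded locally
echo "$single"             # -> echo $WIENROOT
echo "$double"             # -> echo /servernode1
```

So ssh vlsi1 "echo $WIENROOT" can print a value even when the remote environment is empty, which is why it seemed to pick up the local environment.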
Now echo $WIENROOT is giving the $WIENROOT location.
echo $WIENROOT/lapw*
/home/username/WIEN2K/lapw0 /home/username/WIEN2K/lapw0_mpi
/home/username/WIEN2K/lapw0para /home/username/WIEN2K/lapw0para_lapw
/home/username/WIEN2K/lapw1 /home/username/WIEN2K/lapw1c
/home/username/WIEN2K/lapw1c_mpi
The "sudo service sshd restart" step, which I forgot to copy and paste, is
added below.
On 9/28/2019 12:18 PM, Gavin Abo wrote:
After you set both "SendEnv *" and "AcceptEnv *", did you restart the
sshd service [1]? The following illustrates steps that might help you
verify that WIENROOT appears on a remote vlsi node:
username@computername:~$ echo $WIENROOT
username@computername:~$ export WIENROOT=/servernode1
Sir, I have tried with "SetEnv *", but still nothing comes from the echo
command. The user name I posted earlier was a mistake; otherwise there is no
issue with the user name. In the parallel options file I have set taskset to
"no", and the remote options are 1 1 on both the server and client machines.
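For reference, a sketch of what the corresponding lines in $WIENROOT/parallel_options would look like for the settings described above (csh syntax, since the WIEN2k scripts use tcsh; the exact file contents on this system are an assumption):

```shell
# Sketch of parallel_options matching "taskset no, remote options 1 1":
setenv TASKSET "no"     # do not pin processes with taskset
setenv USE_REMOTE 1     # use remote shell (ssh) for k-point parallel jobs
setenv MPI_REMOTE 1     # use remote shell to start mpi jobs
```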
On Sat, 28 Sep 2019 11:36 Gavin
Respected Sir, in my Linux (Ubuntu 18.04 LTS), ssh_config and sshd_config
already contain the two lines "SendEnv LANG LC_*" and "AcceptEnv LANG LC_*"
respectively.
The "LANG LC_*" probably puts only the local language variables into
the remote environment. Did you follow the previous advice
Respected Sir, in my Linux (Ubuntu 18.04 LTS), ssh_config and sshd_config
already contain the two lines "SendEnv LANG LC_*" and "AcceptEnv LANG LC_*"
respectively. However, ssh vlsi1 'echo $WIENROOT' gives nothing
(blank). The command ssh vlsi1 'pwd $WIENROOT' prints "/home/vlsi", the
common home
Ok. So the problem seems to be that on your Linux, ssh does not
send/accept the "environment".
What do you get with:
ssh vlsi2 'echo $WIENROOT'
If you have root permissions, I suggest doing the following:
At least on my Linux (Suse) there is a /etc/ssh directory, with files
ssh_config
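On Ubuntu the same files live under /etc/ssh. A hedged sketch of the suggested edit (requires root; "SendEnv *" / "AcceptEnv *" is very permissive and is best tightened once things work):

```shell
# Append the permissive settings and restart sshd (Ubuntu paths assumed).
echo 'SendEnv *'   | sudo tee -a /etc/ssh/ssh_config    # client side
echo 'AcceptEnv *' | sudo tee -a /etc/ssh/sshd_config   # server side
sudo service sshd restart                               # pick up the new settings
```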
Respected Sir, as per your suggestion I ran the single process with one
iteration successfully and encountered no issue on any of the nodes. However,
the parallel run faces the same error:
grep: *scf1*: No such file or directory
cp: cannot stat '.in.tmp': No such file or directory
FERMI - Error
First of all, one of the errors was: lapw1c: command not found
You showed us only the existence of "lapw1", not "lapw1c" with the ls
commands.
However, since you also have: fixerror_lapw: command not found
I don't think that this is the problem.
I'm more concerned about the
Dear developers and users,
I have 5 individual Linux (Ubuntu)
PCs, each with an Intel i7 octa-core processor and 16 GB RAM, connected via a
1 Gbps LAN. Passwordless ssh is working properly. I have installed WIEN2k 19
on one machine (M1 server) in the directory