Thanks for your help.
Indeed, I could compute the 3 x 3 (x,y) plane using 4 x 4 x 6
(nxsh,nysh,nzsh) nearest neighbor cells in case.in5 (manual page 175).
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
I have tried to execute join_vectorfiles (source in SRC_trig) to
concatenate case.vector files from a parallel run, using the syntax
given in the 'join_vectorfiles.f' Fortran source:
'(x) join_vectorfiles [-up/-dn] case number_of_files'
This did not work, as the program tried to open
Dear all,
I am running parallel k-points over several nodes, and this runs fine
(version 19.1).
However, I have defined local scratch directories on nodes to reduce
file transfers (defining the scratch variable in .bashrc), as recommended.
I see that .vector files accumulate there at each case
Sorry to be back with this again, but it seems that lines for the
corrected init_orb_lapw script:
set atoms=`grep RMT $file.struct|cut -c1-5`
set nat=$#atoms
will count two atoms instead of one, each time the atom symbol is only
one character long (because of space between atom and
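The miscount can be reproduced on a toy struct-file fragment (file contents hypothetical); counting matching lines instead of words is one robust alternative:

```shell
# Hypothetical struct-file fragment: one-letter symbols carry a space
# before their index ("O 1"), multi-letter labels do not ("Fe1").
cat > demo.struct <<'EOF'
Fe1        NPT=  781  R0=0.00005000 RMT=   2.0000
O 1        NPT=  781  R0=0.00010000 RMT=   1.6000
EOF

# Fragile: cut -c1-5 keeps "O 1", which word-splits into two tokens
atoms=$(grep RMT demo.struct | cut -c1-5)
echo "word count: $(echo $atoms | wc -w)"   # 3, although there are 2 atoms

# More robust: count matching lines, not words
nat=$(grep -c RMT demo.struct)
echo "line count: $nat"                     # 2

rm -f demo.struct
```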
# remember: the recommendation is to do this in steps in order to get
# more likely the groundstate for correlated compounds and usually you
# also want to see anyway what is the effect of SO and of U as compared
# to a plain PBE calculation
runsp_lapw (-p)
save_lapw PBE_no_so
runsp_lapw -so (-p)
Thanks for your very kind help: the whole process runs with no error now.
(I am still puzzled about what the -orb and -so options of lapw1 are for:
to my mind, this should have called the extra lapwso -orb process that you
pointed to.)
I summarize correct steps for future readers:
initialize
When SO is activated, it is lapwso (and not lapw1) that adds the U potential
to the Hamiltonian matrix (have a look at the file :log).
Thanks for rectifying (you probably mean 'adds SO'). I should have written for
the actual scheme
"lapw1 -up -band -orb -so" which may be, from U.G. program
Dear all,
I have been trying to perform U+SO computations on Sr2RhO4 to get a
band structure.
My scheme is the following:
Initialize spin-polarized case
init_orb_lapw -orb
init_so_lapw
runsp_lapw -orb -so
define case.klist_band
lapw1 -up -band -orb
lapw1 -dn -band -orb
put Fermi energy in
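Putting the thread's corrections together, the band-structure steps might read as follows (a sketch only, to be checked against the User's Guide; with SO active it is lapwso, not lapw1, that applies the U potential, so -so is not a lapw1 option here):

```shell
x lapw1 -band -up -orb     # eigenvectors along case.klist_band, spin up
x lapw1 -band -dn -orb     # spin down
x lapwso -up -orb          # adds SO (and the U potential) on top of lapw1
```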
While running "runsp_c_lapw -orb -so -p", I get an error message similar
to the one already observed in "runafm_lapw" (Wien2k 19.1):
LAPW0 END
ORB END
LAPW1 END
LAPWSO END
vresp: Undefined variable
Is this a problem similar to the one in :
I manually suppressed the extra lines in both files (and changed this
enormous U ...): cycles chain nicely now.
Thanks for your kind help.
___
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
> And a U of 3 Ry ~ 40 eV for Rh-d !!!
This is my fault: I used eV instead of Ry.
> 4 times the same atom
initso_lapw lists the 4 equivalent Rh atoms, with identical index.
Although I gave the l, U, J input for one Rh atom, it apparently considers
them as 4 inequivalent ones?
Dear all,
While running runsp_lapw -orb for GGA+U on Sr2RhO4, I get the following
error at the beginning of cycle 2:
"end-of-file during read, unit 10, file /... case.dmatup"
There is only one error file not empty, uporb.error with "Error in Vorb"
I have run also the same GGA+U on simple
> You could try changing the top line of the init_orb_lapw script
(in the $WIENROOT directory) from
#!/bin/csh -f
to
#!/bin/tcsh -f
It worked !
Thanks a lot.
___
With WIEN2k 19.1 and Ubuntu: calling "init_orb_lapw -orb" from the
terminal window does list the atoms of the unit cell, and then calls for:
"Enter the name, l, U(Ry) and J of the Atoms (eg. Fe 2 0.3 0.0; exit
with RETURN)"
However, answering with a string of 4 data items separated by spaces as
I've just seen your remark on presentation, here it is for the last run:
The case is with 154 atoms (matrix-size 19000), without inversion
symmetry; 500 k-points. It is running on a single PC (Intel Xeon 5650
with 24 GB RAM and 2 cores for this run) and I use WIEN2k_19.1 with
ifort/mkl
What comes out as a surprise (for me), is that the memory needed for
lapw2s does not scale with the number of CPUs, while it does for lapw1s:
when I reduce the number of CPUs, the lapw1 memory scales down
proportionally, while the total .vector file size is unchanged, and so there
is no improvement
Yes, I have shared memory. Swap on disk is disabled, so the system must
manage differently here.
I just wonder now: is there a way to estimate the memory needed by
lapw2 without running the SCF cycle up to that step? Is it the total .vector size?
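As a rough proxy, one could simply sum the sizes of the .vector files in the scratch directory. A self-contained sketch with dummy files (directory and file names hypothetical; on a real run, point it at $SCRATCH):

```shell
# Create a mock scratch directory with two dummy vector files
mkdir -p /tmp/scratch_demo
dd if=/dev/zero of=/tmp/scratch_demo/case.vector_1 bs=1M count=2 2>/dev/null
dd if=/dev/zero of=/tmp/scratch_demo/case.vector_2 bs=1M count=3 2>/dev/null

# Sum their sizes in bytes and report in MiB
total_bytes=$(cat /tmp/scratch_demo/*.vector* | wc -c)
echo "total vector size: $((total_bytes / 1024 / 1024)) MiB"   # 5 MiB

rm -rf /tmp/scratch_demo
```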
___
I think it is unlikely related to a specific machine or OS problem: I
encountered the same situation with different machine types, different
OS (Redhat Sci. Linux, Ubuntu), different Intel compilers versions (from
2017 to 2019). But it could be some common configuration problem.
I don't use
So, my understanding of the situation is that lapw1 may create .vector
files that are larger than the amount of memory needed by the lapw1 step.
At the lapw2 step, the program must handle these files with less memory
than needed, hence these inefficient physical/cached reads.
This sounds
I was however intrigued by this heavy load of the system, with very
little CPU use (which means inefficient computation).
Actually, lapw2 routines are still blocked for some I/O most of the
time. This I/O is however no longer a physical reading from the hard
disk (and so does not show up in
I changed to Wien2k 19.1 version, latest Intel ifort compiler (2019.4)
and latest Ubuntu (18.04.2), shared memory, 4 CPUs, same case with 154
atoms :
- no more anomalous reading of the disk from parallel lapw2c
- the system is however slowed down a lot (as seen from switching
between
Finally, w2web does create and show the .inM file: apparently, this should
not be tried on too-simple, symmetric cases with no forces, which will
not produce this file.
Why my old case first showed .inm instead of .inM will remain a mystery.
Thanks
___
Yes, but why is case.inM created empty this time ?
___
With Wien2k 19.1 version, latest Intel ifort compiler (2019.4) and
latest Ubuntu (18.04.2):
with an old case from 18.2 version, I initialized again the case (using
w2web) and, using mini.position from the w2web interface, tried to
visualize the case.inm file from the w2web link. This provided
The Intel 2017/2 version of the compiler did not help.
To give an idea: lapw2c read ~800 GiB over 13 hours, which at 20 MB/s
amounts to about 11 hours spent reading from disk.
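The arithmetic behind that estimate (assuming a decimal-MB disk rate) can be checked directly:

```shell
# 800 GiB read at ~20 MB/s: how many hours is that?
bytes=$((800 * 1024 * 1024 * 1024))       # 800 GiB in bytes
seconds=$((bytes / (20 * 1000 * 1000)))   # at 20 MB/s
echo "$((seconds / 3600)) hours"          # 11 hours
```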
The lapw2c routine will finally end, without any error, after it has
read terabytes of data from the disk. We could not detect
Finally, I have the same problem with Intel 2019.4.243 compiler.
My cases are 50 and 150 atoms.
I will try the 2017.2 / Intel-tested Linux version.
___
I am presently trying 2019.4.243 Intel version on the machines that were
problematic with 2018 version. Up to now, I haven't encountered problems.
___
The cumulative amount of disk reads per process can be obtained using
the 'system monitor' / 'processes' tool (provided with Sci. Linux in my case).
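An alternative without the GUI, on Linux, is the per-process I/O accounting in /proc (shown here for the current shell, $$; reading another user's process may require root). This also separates physical reads from cache-served ones, which is relevant to the discussion above:

```shell
# Per-process cumulative I/O counters on Linux.
# read_bytes counts actual storage reads; rchar counts all read() bytes,
# including those served from the page cache.
grep -E 'rchar|read_bytes' /proc/$$/io
```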
Here, this is not a swap problem: I always disable swap, as large memory
overflow would almost freeze the system - I much prefer that it crashes.
Indeed, I am using ifort 2018.3.222 with the options:
-O2 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume
buffered_io -I$(MKLROOT)/include
I will try to change the options / update ifort to a newer version.
___
I am facing a problem with lapw2c on a machine running the 18.2 version
of Wien2k. I suspect this is a machine problem, rather than a Wien2k
one, but would like to be sure:
As lapw2c runs in parallel in a cycle, the lapw2c processes will all try
to read a very large amount of data from the
I finally erased all files in the old Wienroot directory, installed the
new version from it, using the old options for the compiler, and it works.
Apparently, only part of the options were copied when the SRC folders
were replaced in the first procedure. Some others, related to the latest
I will do this.
The reason why I replaced the folders and tried to recompile the whole
thing, is that I thought this was equivalent to a new installation,
which is the recommended way of the instructions.
Should a genuine new installation then consist of creating a new folder
for the new
To update the version, I just unzipped the new folders in the old
directory for SRC folders, to replace them.
The site configuration apparently picked up the old compiler options, so
I did not change them:
-O2 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume
buffered_io
Hi,
I tried to compile the new version 18.2 to update my old 17 one, and I
encountered two errors (I use latest Intel fortran compiler from
Parallel Studio):
- In SRC_mixer: for main.c: undefined reference to 'MAIN__'
and then multiple 'undefined reference'
- In SRC_nmr: the same errors.
It has been said many times that forces are not correct when spin-orbit
is included.
Also, that the contribution of SO should generally be small, compared to
energies that determine the equilibrium positions, so that minimization
without SO should be a good approximation, for most cases.
I
Hello,
If I:
- extract all modified SRC_XX.tar.gz files in the WIENROOT directory,
- run siteconfig_lapw (from any location) and select Update option,
should this automatically update and compile all new SRC_XX folders, and
nothing more is needed ?
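A sketch of those two steps (non-interactive parts only; siteconfig_lapw itself is menu-driven, and the SRC_XX names are placeholders):

```shell
cd $WIENROOT
for f in SRC_*.tar.gz; do tar -xzf "$f"; done   # unpack updated sources
siteconfig_lapw                                 # then pick the Update option
```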
Thanks
Thanks for clarifying my poor knowledge of clusters!
Is it enough if these directories hold the case directories for Wien2k,
or is other data needed ?
___
Indeed, I have given too little information.
I am trying to configure the k-parallelization over several PCs connected by a
regular (slow) network. I use w2web, so the commands are transparent to me.
Wien2k is installed on all machines, and paths are the same on all
machines. SSH works without
Dear users,
I have failed to configure the parallel options to run cases on several
machines, each with several CPUs, driven by the ssh protocol.
* Configuring the parallel options with shared memory, MPI = 0, and the ssh
protocol allows running parallel jobs using several CPUs on the same
machine.
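For k-point parallelization across machines over ssh, the job distribution is steered by a .machines file in the case directory; a minimal example (hostnames hypothetical, one line per job slot) could look like:

```
1:pc1
1:pc1
1:pc2
1:pc2
granularity:1
```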
Dear users,
I am trying to plot the case.r2v density, using the w2web interface.
It seems that the "x lapw5" button of the electron-density task does
more than simply execute lapw5, as the lapw5.def file (supposed to be an
input) will be changed back after this to the default options (so, it
Dear all,
Xcrysden works fine from the w2web interface (run locally), when viewing
a structure. So, I think it is correctly installed and configured.
However, it displays "requires X-Windows system ... Calc" when trying to
use the "Calculate density with Xcrysden" button, or "requires
Hi,
While running a case, I have noticed that the lapw1/2.error file is not
empty, and contains 'Error in lapw1/2'.
However, this does NOT stop the case.
Could this be related to the parallel mode that I use ?
Indeed, I have noticed that this did not happen in the preceding similar
case