Dear Wien2k mailing list,
after upgrading to Wien2k_14 I'm seeing this warning:
FCORE for atom 1 not converged at RMT. Estimated inaccuracy: 0.2417 mRy/bohr
This is a monoclinic HfO2 cell; atom 1 is the Hf atom (struct file
attached). I'm using the mBJ potential and I don't see this warning
Dear Wien2k mailing list,
I have a problem with a crash in parallel lapw1. It crashes with "SECLIT -
Error in Cholesky" output in stderr. Looking at the tail of the corresponding
case.output1_2 I see:
Time for los (hamilt, cpu/wall) : 0.8 5.6
Time for alm (hns) : 4.2
On Thu, 2014-10-09 at 06:23 -0500, Laurence Marks wrote:
Why are you using P1? You have made everything much slower and less
efficient.
Beyond this it is hard to guess.
Well, P1 is what I get during the initialization with sgroup.
In the meantime I managed to get it running by removing -it
Did you mean increase emax to 2.5?
Anyway this is really helpful, thank you very much.
On Thu, Oct 9, 2014 at 7:01 AM, Pavel Ondracka
pavel.ondra...@email.cz wrote:
On Thu, 2014-10-09 at 06:23 -0500, Laurence Marks wrote:
Why are you using P1? You have made everything much slower
Dear Wien2k mailing list,
I have recently noticed a lot of messages like this being printed by
lapw0 -p in my case.dayfiles:
int:rho,tauw,grho,g2rho 4.554539522661150E-002 0.120617172558130 0.148237064127382 0.224103059146677 tauwrong= 0.117956979950693
This is with mBJ,
Dear Mamta,
the compiler errors
- Logicals at (1) must be compared with .eqv. instead of .eq.
- Nonnegative width required in format string at (1)
are known problems in the old Wien2k versions, as mentioned previously:
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg11247.html
On my
On Mon, 2015-02-09 at 01:36 -0800, Yundi Quan wrote:
Hi,
Is there a way of reducing the memory usage? I have a case with 72
atoms per unit cell, and my cluster has 8 GB of memory per node and each
node has 8 cores. I submitted the job and got the error message saying
that insufficient virtual
On Mon, 2015-02-09 at 11:33 +0100, Pavel Ondracka wrote:
On Mon, 2015-02-09 at 01:36 -0800, Yundi Quan wrote:
Hi,
Is there a way of reducing the memory usage? I have a case with 72
atoms per unit cell, and my cluster has 8 GB of memory per node and each
node has 8 cores. I submitted the job
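A quick back-of-the-envelope check, sketched here with the numbers from the question (it is not part of the original thread): with 8 lapw1 processes sharing one 8 GB node, each process has only about 1 GB to work with, which a 72-atom cell can easily exceed.

```shell
# Rough per-process memory budget (illustrative numbers from the question).
# Running one lapw1 process per core leaves node_mem / procs per process;
# running fewer processes per node raises that budget accordingly.
node_mem_gb=8
procs_per_node=8
budget=$((node_mem_gb / procs_per_node))
echo "per-process budget: ${budget} GB"
```

Spreading the same number of parallel jobs over more nodes, or putting fewer k-point-parallel processes on each node, is the corresponding way to raise this per-process budget.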
Dear Wien2k mailing list,
for some of my cases filtvec was mysteriously failing to open vector
files even when the vector files were completely valid. It turns out the
max length for file names in filtvec is 80 characters, so if your CASE
directory is deep enough in the directory structure, it
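The failure condition can be sketched in a few lines of shell (this check is my illustration, not from the original report, and the directory path below is a made-up example): measure the full vector-file path against the 80-character limit before running filtvec.

```shell
# Warn if the full vector-file path would overflow filtvec's 80-character
# file-name buffer. The path here is a hypothetical example.
vec="/home/user/projects/very/deep/tree/HfO2/HfO2.vector_1"
if [ "${#vec}" -gt 80 ]; then
    echo "WARNING: path is ${#vec} chars; filtvec may fail to open it"
else
    echo "path length ${#vec} is within the 80-character limit"
fi
```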
On Tue, 2015-03-03 at 17:09 +0900, Paul Fons wrote:
Hi, I did run init_hf_lapw and I saw no sign of errors when running it;
however, I ran it from the web interface the first time. I just
restarted the calculation using bash and have confirmed that the ibz and
fbz files exist.
The reason that I am
On 02/19/2015 01:33 PM, Pavel Ondracka wrote:
Dear Wien2k mailing list,
I would like to estimate electron localization by computing the inverse
participation ratio. For that I need to get the Mulliken point charges
On Mon, 2015-04-27 at 13:19 +0530, saurabh samant wrote:
Dear WIEN2k users,
I have done a spin-polarized GGA+SO+U calculation for an AB2S4
compound. The Fermi level in the DOS plot is at the top of the valence
band, but in the band structure plot the Fermi level is in the middle of
the valence
On Wed, 2015-11-11 at 08:59 +0100, Peter Blaha wrote:
> I'm not sure why you would not set this permanently in .bashrc.
I wanted to be able to adjust the variables on per task/job basis (when
running multiple cases in parallel).
> Anyway, if ssh does not allow it on your system, you could
Dear Wien2k mailing list,
I'm having some trouble passing environment variables (e.g.
OMP_NUM_THREADS or similar) to lapw1 and lapw2. This works in serial
mode, where the lapw* programs are called directly; however, in parallel
mode they are run through a remote shell and all environment info is
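One common workaround, sketched here as an illustration rather than taken from the truncated message above: since a remote shell typically starts fresh and does not inherit the caller's environment, the variable can be prefixed onto the remote command string itself so it travels with the command. The host name `node01` and the `.def` file name are assumptions for the example.

```shell
# Build a remote command that carries its own environment, because an
# ssh-spawned login shell will not inherit OMP_NUM_THREADS from the caller.
OMP_NUM_THREADS=4
remote_cmd="env OMP_NUM_THREADS=$OMP_NUM_THREADS lapw1 lapw1.def"
echo "$remote_cmd"
# This string would then be dispatched as: ssh node01 "$remote_cmd"
```

In Wien2k the remote-shell invocation is configurable (the `remote` setting in $WIENROOT/parallel_options), which would be the natural place to hook such a prefix in on a per-job basis.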