Re: [Wien] hf error -monolayer

2023-06-21 Thread Peter Blaha

Is your case.inhf correct?

What is the content of hf.error?


On 21.06.2023 at 13:47, Brik Hamida wrote:

Dear

I have done an HF-SCF calculation for a BULK semiconductor, and it went
well. Then I followed the same HF calculation steps for a MONOLAYER
semiconductor, but unfortunately I get an error in hf (the hf error file):


  start (21 June 2023 CET 12:28:26 PM) with lapw0 (40/99 to go)

 cycle 1 (21 June 2023 CET 12:28:26 PM) (40/99 to go)

>   lapw0 -grr   (12:28:26) 6.3u 0.0s 0:06.38 99.8% 0+0k 0+1io 0pf+0w
>   lapw0(12:28:32) 4.5u 0.0s 0:04.61 100.0% 0+0k 0+2704io 0pf+0w
>   lapw1(12:28:37) 41.2u 0.5s 0:41.74 100.0% 0+0k 0+72976io 0pf+0w
>   lapw2(12:29:19) 1.1u 0.0s 0:01.16 100.0% 0+0k 0+3864io 0pf+0w
>   lcore(12:29:20) 0.0u 0.0s 0:00.05 100.0% 0+0k 0+2984io 0pf+0w
>   hf   -mode1   -redklist  (12:29:20) 0.0u 0.0s 0:00.09 88.8% 0+0k 
0+24io 0pf+0w
error: command   /home/hmd/wien18/hf hf.def   failed

>   stop error

Please, can someone tell me how I can solve this problem? Thanks in
advance.

Best regards





___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST 
at:http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


--
---
Peter Blaha,  Inst. f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-158801165300
Email:peter.bl...@tuwien.ac.at   
WWW:http://www.imc.tuwien.ac.at   WIEN2k:http://www.wien2k.at

-


Re: [Wien] ** testerror: Error in Parallel LAPW

2023-06-21 Thread Peter Blaha

The example you showed us was a k-parallel job on only one node.

To fix this, just set USE_REMOTE to zero (either permanently in $WIENROOT,
or temporarily in your submitted job script).
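That switch can be scripted; a minimal sketch, assuming the variable lives in WIEN2k_parallel_options as a csh `setenv` line (a sample file stands in here for the real $WIENROOT copy):

```shell
# Sample stand-in for $WIENROOT/WIEN2k_parallel_options (csh syntax).
cat > WIEN2k_parallel_options <<'EOF'
setenv USE_REMOTE 1
setenv MPI_REMOTE 0
EOF

# Flip USE_REMOTE to 0 so k-parallel jobs fork locally instead of
# dispatching via ssh (single shared-memory node).
sed -i 's/^setenv USE_REMOTE .*/setenv USE_REMOTE 0/' WIEN2k_parallel_options
grep USE_REMOTE WIEN2k_parallel_options
```

For a one-job fix the same `setenv USE_REMOTE 0` line can simply be placed in the submitted job script before `run_lapw` is called.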



Another test would be to make a new WIEN2k installation using
"ifort+slurm" in siteconfig. It may work out of the box, in particular
when using mpi-parallel. It uses srun, but I'm not sure whether all
slurm configurations are identical to your cluster's.



On 21.06.2023 at 22:58, Ilias Miroslav, doc. RNDr., PhD. wrote:

Dear all,

ad: 
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg22588.html 

" In order to use multiple nodes, you need to be able to do 
passwordless ssh to the allocated nodes (or any other command 
substituting ssh). "


According to our cluster admin, one can use (maybe) 'srun' to allocate 
and connect to a batch node. 
https://hpc.gsi.de/virgo/slurm/resource_allocation.html


Would it be possible to use "srun" within Wien2k scripts to run
parallel jobs, please? We are using common disk space on that cluster.


Best, Miro




-


Re: [Wien] ** testerror: Error in Parallel LAPW

2023-06-21 Thread Laurence Marks
With apologies to Lukasz and Miro, there are some inaccurate statements
being made about how to use Wien2k in parallel -- the code is more complex
(and smarter). Please read section 5.5 carefully and in detail, then read it
again. Google what commands such as ssh, srun, rsh, and mpirun do.

If your cluster does not allow ssh to the slurm-allocated nodes, then try
to get your admin to read section 5.5. There are ways to get around
forbidden ssh, but you need to understand computers first.

---
Professor Laurence Marks (Laurie)
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu
"Research is to see what everybody else has seen, and to think what nobody
else has thought" Albert Szent-Györgyi

On Thu, Jun 22, 2023, 00:23 pluto via Wien 
wrote:

> Dear Miro,
>
> On my cluster it works by a command
>
> salloc -p cluster_name -N6 sleep infinity &
>
> This particular command allocates 6 nodes. You can find which ones with the
> squeue command. Then passwordless ssh to these nodes is allowed in my
> cluster. Then in .machines I include the names of these nodes and things
> work.
>
> But there is a good chance that this is blocked in your cluster; you need
> to ask your administrator.
>
> I think srun is the required command within the slurm shell script. You
> should get some example shell scripts from your administrator or
> colleagues who use the cluster.
>
> As I mentioned in my earlier email, Prof. Blaha provides workarounds for
> slurm. If simple ways are blocked, you will just need to implement these
> workarounds. It might not be easy, but setting up cluster calculations
> is not supposed to be easy.
>
> Best,
> Lukasz
>
>
>
>
> On 2023-06-21 22:58, Ilias Miroslav, doc. RNDr., PhD. wrote:
> > Dear all,
> >
> >  ad:
> >
> https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg22588.html
> > [1]
> >
> >  " In order to use multiple nodes, you need to be able to do
> > passwordless ssh to the allocated nodes (or any other command
> > substituting ssh). "
> >
> >  According to our cluster admin, one  can use (maybe) 'srun' to
> > allocate and connect to a batch node.
> > https://hpc.gsi.de/virgo/slurm/resource_allocation.html [2]
> >
> >  Would it be possible to use "srun" within Wien2k scripts to run
> > parallel jobs, please?  We are using common disk space on that
> > cluster.
> >
> >  Best, Miro
> >
> >


Re: [Wien] ** testerror: Error in Parallel LAPW

2023-06-21 Thread pluto via Wien

Dear Miro,

On my cluster it works by a command

salloc -p cluster_name -N6 sleep infinity &

This particular command allocates 6 nodes. You can find which ones with the
squeue command. Then passwordless ssh to these nodes is allowed in my
cluster. Then in .machines I include the names of these nodes and things
work.
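The workflow described above can be sketched as follows; the partition and node names are placeholders, and on a real cluster the granted nodes would come from squeue:

```shell
# Step 1 (on a real slurm cluster only): reserve nodes and see which
# were granted:
#   salloc -p cluster_name -N6 sleep infinity &
#   squeue -u $USER -o %N
# Step 2: write one k-parallel slot per entry into .machines
# (placeholder node names below).
nodes="lxbk1177 lxbk1178"
rm -f .machines
echo "granularity:1" > .machines
for n in $nodes; do
  echo "1:$n" >> .machines
done
cat .machines
```

Each `1:node` line is one k-point parallel slot; repeating a node name gives that node several slots, as in the 8-line .machines shown later in this digest.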


But there is a good chance that this is blocked in your cluster; you need
to ask your administrator.


I think srun is the required command within the slurm shell script. You 
should get some example shell scripts from your administrator or 
colleagues who use the cluster.


As I mentioned in my earlier email, Prof. Blaha provides workarounds for 
slurm. If simple ways are blocked, you will just need to implement these 
workarounds. It might not be easy, but setting up cluster calculations 
is not supposed to be easy.


Best,
Lukasz




On 2023-06-21 22:58, Ilias Miroslav, doc. RNDr., PhD. wrote:

Dear all,

 ad:
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg22588.html
[1]

 " In order to use multiple nodes, you need to be able to do
passwordless ssh to the allocated nodes (or any other command
substituting ssh). "

 According to our cluster admin, one  can use (maybe) 'srun' to
allocate and connect to a batch node.
https://hpc.gsi.de/virgo/slurm/resource_allocation.html [2]

 Would it be possible to use "srun" within Wien2k scripts to run
parallel jobs, please?  We are using common disk space on that
cluster.

 Best, Miro




Re: [Wien] ** testerror: Error in Parallel LAPW

2023-06-21 Thread Ilias Miroslav, doc. RNDr., PhD.
Dear all,

ad: https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg22588.html
" In order to use multiple nodes, you need to be able to do passwordless ssh to 
the allocated nodes (or any other command substituting ssh). "

According to our cluster admin, one  can use (maybe) 'srun' to allocate and 
connect to a batch node. https://hpc.gsi.de/virgo/slurm/resource_allocation.html

Would it be possible to use "srun" within Wien2k scripts to run parallel jobs,
please?  We are using common disk space on that cluster.

Best, Miro


Re: [Wien] hf error -monolayer

2023-06-21 Thread Brik Hamida
I'm sorry, I made a typo in the last message.
I meant the command: run_kgenhf_lapw -redklist, not run_kgenhf_lapw
-hf -redklist.


Re: [Wien] hf error -monolayer

2023-06-21 Thread Brik Hamida
Dear Fabien,
Thank you for your reply.
Indeed, the two files are generated.
I executed run_kgenhf_lapw -hf -redklist with a k-mesh (e.g. 4x4x4) and a
commensurate reduced k-mesh (e.g. 2x2x2).

*case.klist :*
 1 0 0 0 4  1.0 -7.0  1.5 0
k, div: (  4  4  4)
 2 0 0 1 4  2.0
 3 0 0 2 4  1.0
 4 0 1 0 4  6.0
 5 0 1 1 4 12.0
 6 0 1 2 4  6.0
 7 0 2 0 4  3.0
 8 0 2 1 4  6.0
 9 0 2 2 4  3.0
10 1 1 0 4  6.0
11 1 1 1 4 12.0
12 1 1 2 4  6.0
END

*case.kgen:*
1272  2.60416667E-03   101 1
 [k-point table omitted: the fixed-width integer columns were fused
 during archiving and cannot be reliably reconstructed]

Re: [Wien] hf error -monolayer

2023-06-21 Thread fabien . tran
Not enough information is provided. In particular, were the various 
files case.klist* and case.kgen* properly generated with the command 
"run_kgenhf_lapw -redklist"?



On 21.06.2023 13:47, Brik Hamida wrote:

Dear

I have done an HF-SCF calculation for a BULK semiconductor, and it went
well. Then I followed the same HF calculation steps for a MONOLAYER
semiconductor, but unfortunately I get an error in hf (the hf error
file):

 start (21 June 2023 CET 12:28:26 PM) with lapw0
(40/99 to go)

cycle 1 (21 June 2023 CET 12:28:26 PM)
(40/99 to go)


  lapw0 -grr (12:28:26) 6.3u 0.0s 0:06.38 99.8% 0+0k 0+1io

0pf+0w

  lapw0  (12:28:32) 4.5u 0.0s 0:04.61 100.0% 0+0k 0+2704io

0pf+0w

  lapw1  (12:28:37) 41.2u 0.5s 0:41.74 100.0% 0+0k 0+72976io

0pf+0w

  lapw2  (12:29:19) 1.1u 0.0s 0:01.16 100.0% 0+0k 0+3864io

0pf+0w

  lcore(12:29:20) 0.0u 0.0s 0:00.05 100.0% 0+0k 0+2984io 0pf+0w
  hf   -mode1   -redklist (12:29:20) 0.0u 0.0s 0:00.09 88.8%

0+0k 0+24io 0pf+0w
error: command   /home/hmd/wien18/hf hf.def   failed


  stop error


Please, can someone tell me how I can solve this problem? Thanks in
advance.
Best regards


[Wien] hf error -monolayer

2023-06-21 Thread Brik Hamida
Dear

I have done an HF-SCF calculation for a BULK semiconductor, and it went well.
Then I followed the same HF calculation steps for a MONOLAYER
semiconductor, but unfortunately I get an error in hf (the hf error file):

 start (21 June 2023 CET 12:28:26 PM) with lapw0 (40/99 to go)

cycle 1 (21 June 2023 CET 12:28:26 PM) (40/99 to go)

>   lapw0 -grr  (12:28:26) 6.3u 0.0s 0:06.38 99.8% 0+0k 0+1io 0pf+0w
>   lapw0   (12:28:32) 4.5u 0.0s 0:04.61 100.0% 0+0k 0+2704io 0pf+0w
>   lapw1   (12:28:37) 41.2u 0.5s 0:41.74 100.0% 0+0k 0+72976io 0pf+0w
>   lapw2   (12:29:19) 1.1u 0.0s 0:01.16 100.0% 0+0k 0+3864io 0pf+0w
>   lcore   (12:29:20) 0.0u 0.0s 0:00.05 100.0% 0+0k 0+2984io 0pf+0w
>   hf   -mode1   -redklist (12:29:20) 0.0u 0.0s 0:00.09 88.8% 0+0k 
> 0+24io 0pf+0w
error: command   /home/hmd/wien18/hf hf.def   failed

>   stop error


Please, can someone tell me how I can solve this problem? Thanks in advance.
Best regards


Re: [Wien] Difference in DOS and BAND graphics

2023-06-21 Thread Peter Blaha

You also need to check the other spin!

It looks as if you almost have a semiconductor. This is in full
agreement with the published paper you quoted.

mBJ or PBE+U will open a gap.



On 21.06.2023 at 10:48, Hülya Gürçay wrote:

Dear Prof. Dr. Peter Blaha,

in case.scf file;

  :GAP (global)   :    0.0    Ry =     0.0   eV (metal)
:GAP (this spin):    0.0    Ry =     0.0   eV (metal)
          Bandranges (emin - emax) and occupancy:
:BAN00011:  11   -1.101467   -1.101123  1.
:BAN00012:  12   -0.070526    0.009478  1.
:BAN00013:  13    0.370443    0.551507  1.
:BAN00014:  14    0.456229    0.563318  1.
:BAN00015:  15    0.484108    0.563318  1.
:BAN00016:  16    0.581468    0.737374  1.
:BAN00017:  17    0.616386    0.737374  1.
:BAN00018:  18    0.697221    0.763196  1.
:BAN00019:  19    0.740434    0.802125  1.
:BAN00020:  20    0.751327    0.811238  1.
:BAN00021:  21    0.816449    0.919080  0.02868010
:BAN00022:  22    0.870736    0.932019  0.
:BAN00023:  23    0.873628    0.936869  0.
:BAN00024:  24    0.925832    0.986489  0.
:BAN00025:  25    0.944819    1.002780  0.
:BAN00026:  26    0.970538    1.031454  0.
         Energy to separate low and high energystates:    0.32044

Thanks in advance
Hülya Gürçay

Peter Blaha, on Wed, 21 Jun 2023 at 11:12, wrote:


Hard to say what goes wrong. Maybe the k-mesh for the band structure
does not catch the metallic bands, or you used the qtl-file from the
band-k-mesh instead of the full scf-grid?

Anyway, check directly the case.scf file. The label is not called
:BAND but :BAN; to see whether it is a metal or an insulator you can also
check whether there is a :GAP line (only with TETRA).


On 21.06.2023 at 06:18, Hülya Gürçay wrote:

Dear WIEN2k users,

I optimised MnVZrP and found the equilibrium lattice
parameter 6.07 A.
I used this lattice parameter in the SCF calculation,
copied the SCF files to a new folder, and plotted the Band and DOS
graphs through the interface.
In the band graph there is a gap in the spin-down channel and the
material is a semi-metal, while in the DOS graph metal appears in
both the spin-up and spin-down channels.
I plotted in both eV and Ry units.

How can I fix this incompatibility?

XC: GGA-PBE
K points: 20,20,20
Rkmax:8; Lmax:12,
Cc: 0.0001; Ec:0.1

This material has been calculated before with different code ,
here
https://pubs.rsc.org/en/content/articlelanding/2020/ra/d0ra04633g


Sincerely,
Hülya











Re: [Wien] upgradation of wien2k

2023-06-21 Thread Burhan Ahmed
I'm upgrading from wien2k_20 to wien2k_23.2 using the steps mentioned on
the website.

After running ./siteconfig_lapw -update wien2k_direcyory(old) and
choosing the R (recompile) option and then A (compile all), it is taking
longer than expected. It has been more than 2.5 hours and it is still
compiling with no error message. Is there an issue with the compilation, or
does it generally take much longer than a usual installation/update on a
simple i3/i5 machine?

On Mon, 5 Jun, 2023, 11:36 am Peter Blaha,  wrote:

> Download the latest version and follow the instructions on the download
> site. There is even a new option for using the old siteconfig options in
> the new distribution.
>
>
> On 05.06.2023 at 07:38, Burhan Ahmed wrote:
> > Dear experts,
> >
> > I am having a compact cluster with 100gb RAM, 36 cores and 12 TB SATA
> > hard disk and currently using wien2k_20 in CentOS. How do I upgrade the
> > version of wien2k to wien2k_23.x.
> >
> > Regards
> >
> > Burhan Ahmed
> >
> > *Research Scholar, AUS *
> >
> >
>


Re: [Wien] Difference in DOS and BAND graphics

2023-06-21 Thread Hülya Gürçay
Dear Prof. Dr. Blaha,

I checked the gap with TETRA.
There is a small gap of about 0.03 eV below the Fermi energy level.

Sincerely,
Hülya Gürçay

Hülya Gürçay, on Wed, 21 Jun 2023 at 11:48, wrote:

> Dear Prof. Dr. Peter Blaha,
>
> in case.scf file;
>
>  :GAP (global)   :    0.0    Ry =     0.0   eV (metal)
> :GAP (this spin):    0.0    Ry =     0.0   eV (metal)
>  Bandranges (emin - emax) and occupancy:
> :BAN00011:  11   -1.101467   -1.101123  1.
> :BAN00012:  12   -0.070526    0.009478  1.
> :BAN00013:  13    0.370443    0.551507  1.
> :BAN00014:  14    0.456229    0.563318  1.
> :BAN00015:  15    0.484108    0.563318  1.
> :BAN00016:  16    0.581468    0.737374  1.
> :BAN00017:  17    0.616386    0.737374  1.
> :BAN00018:  18    0.697221    0.763196  1.
> :BAN00019:  19    0.740434    0.802125  1.
> :BAN00020:  20    0.751327    0.811238  1.
> :BAN00021:  21    0.816449    0.919080  0.02868010
> :BAN00022:  22    0.870736    0.932019  0.
> :BAN00023:  23    0.873628    0.936869  0.
> :BAN00024:  24    0.925832    0.986489  0.
> :BAN00025:  25    0.944819    1.002780  0.
> :BAN00026:  26    0.970538    1.031454  0.
> Energy to separate low and high energystates:    0.32044
>
> Thanks in advance
> Hülya Gürçay
>
> Peter Blaha, on Wed, 21 Jun 2023 at 11:12, wrote:
>
>> Hard to say what goes wrong. Maybe the k-mesh for the band structure does
>> not catch the metallic bands, or you used the qtl-file from the
>> band-k-mesh instead of the full scf-grid?
>>
>> Anyway, check directly the case.scf file. The label is not called :BAND
>> but :BAN; to see whether it is a metal or an insulator you can also check
>> whether there is a :GAP line (only with TETRA).
>>
>>
>> On 21.06.2023 at 06:18, Hülya Gürçay wrote:
>>
>> Dear WIEN2k users,
>>
>> I optimised MnVZrP and found the equilibrium lattice
>> parameter 6.07 A.
>> I used this lattice parameter in the SCF calculation,
>> copied the SCF files to a new folder, and plotted the Band and DOS graphs
>> through the interface.
>> In the band graph there is a gap in the spin-down channel and the
>> material is a semi-metal, while in the DOS graph metal appears in both the
>> spin-up and spin-down channels.
>> I plotted in both eV and Ry units.
>>
>> How can I fix this incompatibility?
>>
>> XC: GGA-PBE
>> K points: 20,20,20
>> Rkmax:8; Lmax:12,
>> Cc: 0.0001; Ec:0.1
>>
>> This material has been calculated before with different code , here
>> https://pubs.rsc.org/en/content/articlelanding/2020/ra/d0ra04633g
>>
>> Sincerely,
>> Hülya
>>
>>


Re: [Wien] Difference in DOS and BAND graphics

2023-06-21 Thread Hülya Gürçay
Dear Prof. Dr. Peter Blaha,

in case.scf file;

 :GAP (global)   :    0.0    Ry =     0.0   eV (metal)
:GAP (this spin):    0.0    Ry =     0.0   eV (metal)
 Bandranges (emin - emax) and occupancy:
:BAN00011:  11   -1.101467   -1.101123  1.
:BAN00012:  12   -0.070526    0.009478  1.
:BAN00013:  13    0.370443    0.551507  1.
:BAN00014:  14    0.456229    0.563318  1.
:BAN00015:  15    0.484108    0.563318  1.
:BAN00016:  16    0.581468    0.737374  1.
:BAN00017:  17    0.616386    0.737374  1.
:BAN00018:  18    0.697221    0.763196  1.
:BAN00019:  19    0.740434    0.802125  1.
:BAN00020:  20    0.751327    0.811238  1.
:BAN00021:  21    0.816449    0.919080  0.02868010
:BAN00022:  22    0.870736    0.932019  0.
:BAN00023:  23    0.873628    0.936869  0.
:BAN00024:  24    0.925832    0.986489  0.
:BAN00025:  25    0.944819    1.002780  0.
:BAN00026:  26    0.970538    1.031454  0.
Energy to separate low and high energystates:    0.32044

Thanks in advance
Hülya Gürçay

Peter Blaha, on Wed, 21 Jun 2023 at 11:12, wrote:

> Hard to say what goes wrong. Maybe the k-mesh for the band structure does
> not catch the metallic bands, or you used the qtl-file from the
> band-k-mesh instead of the full scf-grid?
>
> Anyway, check directly the case.scf file. The label is not called :BAND
> but :BAN; to see whether it is a metal or an insulator you can also check
> whether there is a :GAP line (only with TETRA).
>
>
> On 21.06.2023 at 06:18, Hülya Gürçay wrote:
>
> Dear WIEN2k users,
>
> I optimised MnVZrP and found the equilibrium lattice
> parameter 6.07 A.
> I used this lattice parameter in the SCF calculation,
> copied the SCF files to a new folder, and plotted the Band and DOS graphs
> through the interface.
> In the band graph there is a gap in the spin-down channel and the
> material is a semi-metal, while in the DOS graph metal appears in both the
> spin-up and spin-down channels.
> I plotted in both eV and Ry units.
>
> How can I fix this incompatibility?
>
> XC: GGA-PBE
> K points: 20,20,20
> Rkmax:8; Lmax:12,
> Cc: 0.0001; Ec:0.1
>
> This material has been calculated before with different code , here
> https://pubs.rsc.org/en/content/articlelanding/2020/ra/d0ra04633g
>
> Sincerely,
> Hülya
>
>
>


Re: [Wien] Difference in DOS and BAND graphics

2023-06-21 Thread Peter Blaha
Hard to say what goes wrong. Maybe the k-mesh for the band structure does
not catch the metallic bands, or you used the qtl-file from the
band-k-mesh instead of the full scf-grid?

Anyway, check directly the case.scf file. The label is not called :BAND 
but :BAN; to see if it is a metal or an insulator you can also check if 
there is a :GAP line (only with TETRA).
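Such a check can be done directly with grep; a small sketch using sample lines that mimic the :GAP/:BAN output quoted in this thread (the case.scf written here is a stand-in for the real file):

```shell
# Stand-in case.scf fragment; a real :GAP line is only written with TETRA.
cat > case.scf <<'EOF'
:GAP (global)   :    0.0    Ry =     0.0   eV (metal)
:BAN00021:  21    0.816449    0.919080  0.02868010
:BAN00022:  22    0.870736    0.932019  0.00000000
EOF

grep ':GAP' case.scf   # metal vs. insulator (written with TETRA only)
grep ':BAN' case.scf   # band ranges; note the label is :BAN, not :BAND
```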



On 21.06.2023 at 06:18, Hülya Gürçay wrote:

Dear WIEN2k users,

I optimised MnVZrP and found the equilibrium lattice
parameter 6.07 A.

I used this lattice parameter in the SCF calculation,
copied the SCF files to a new folder, and plotted the Band and DOS graphs
through the interface.
In the band graph there is a gap in the spin-down channel and the
material is a semi-metal, while in the DOS graph metal appears in both
the spin-up and spin-down channels.

I plotted in both eV and Ry units.

How can I fix this incompatibility?

XC: GGA-PBE
K points: 20,20,20
Rkmax:8; Lmax:12,
Cc: 0.0001; Ec:0.1

This material has been calculated before with different code , here 
https://pubs.rsc.org/en/content/articlelanding/2020/ra/d0ra04633g


Sincerely,
Hülya





Re: [Wien] ** testerror: Error in Parallel LAPW

2023-06-21 Thread Peter Blaha



It crashed with the message "Host key verification failed."

Seems that your cluster does not allow ssh to an allocated node (ask
your sys admin).


In $WIENROOT/WIEN2k_parallel_options there are variables like
USE_REMOTE. If set to zero, ssh is not used and you can run in
parallel, but only on one shared-memory node.


In order to use multiple nodes, you need to be able to do passwordless 
ssh to the allocated nodes (or any other command substituting ssh).



Here is the content of the file
/lustre/ukt/milias/scratch/Wien2k_23.2_job.main.N1.n4.jid3009460/LvO2onQg/.machines:

1:lxbk1177
1:lxbk1177
1:lxbk1177
1:lxbk1177
1:lxbk1177
1:lxbk1177
1:lxbk1177
1:lxbk1177

Job is running on lxbk1177, with 8 cpus allocated;

and this is from log :

running x dstart :
starting parallel dstart at Tue 20 Jun 2023 05:16:21 PM CEST
 .machine0 : processors
running dstart in single mode
STOP DSTART ENDS
10.249u 0.322s 0:11.19 94.3%    0+0k 158496+101160io 437pf+0w

running 'run_lapw -p -ec 0.0001 -NI'
STOP  LAPW0 END
Host key verification failed.
[1]  + Done  ( ( $remote $machine[$p] "cd 
$PWD;$set_OMP_NUM_THREADS;$t $taskset0 $exe ${def}_$loop.def ;fixerr
or_lapw ${def}_$loop"; rm -f .lock_$lockfile[$p] ) >& .stdout1_$loop; 
if ( -f .stdout1_$loop ) bashtime2csh.pl_lapw .stdout1_$loop > .
temp1_$loop; grep \% .temp1_$loop >> .time1_$loop; grep -v \% 
.temp1_$loop | perl -e "print stderr " )

[the same "Host key verification failed." / Done block repeats for the
remaining seven parallel processes]

LvO2onQg.scf1_1: No such file or directory.
grep: *scf1*: No such file or directory
STOP FERMI - Error
cp: cannot stat '.in.tmp': No such file or directory
grep: *scf1*: No such file or directory

>   stop error



file ":parallel"

starting parallel lapw1 at Tue 20 Jun 2023 05:17:08 PM CEST
lxbk1177(4)  lxbk1177(3)  lxbk1177(3)  lxbk1177(3)
 lxbk1177(3)  lxbk1177(3)  lxbk1177(3)  lxbk1177(3)

   Summary of lapw1para:
  lxbk1177  k=25    user=0  wallclock=0
<-  done at Tue 20 Jun 2023 05:17:14 PM CEST
-
->  starting Fermi on lxbk1177 at Tue 20 Jun 2023 05:17:15 PM CEST
**  LAPW2 crashed at Tue 20 Jun