Re: [Wien] (no subject)

2019-07-22 Thread Gavin Abo
Consider trying and using WIEN2k 19.1: the WIEN2k updates page [1] lists 
several improvements and fixes made to the spin-orbit code since 
WIEN2k 17.1.


For example, intel-18.0.1 is a recent ifort compiler.  Under VERSION_18.1: 
1.6.2018 on that page, you should see the following entry for SRC_lapwso:


get_nloat.f (*fix for read-bug* of unformatted files with recent ifort)

[1] http://susi.theochem.tuwien.ac.at/reg_user/updates/

On 7/22/2019 7:23 AM, Aamir Shafique wrote:


Hello,
I have installed Wien2k 17.1 with compiler intel-18.0.1.
- Normal SCF cycles completed successfully using SCAN meta-GGA or 
PBE-GGA.

- SOC was initiated from the command line using initso_lapw
but I got the following error message:
lapwso lapwso.def   failed

>   stop error

With Best Regards,

Aamir Shafique

Postdoctoral fellow
Physical Sciences and Engineering
King Abdullah University of Science and Technology
Thuwal 23955, Saudi Arabia.
Direct: +966 54 5351602
Email: aamir.shafi...@kaust.edu.sa
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread Peter Blaha

Please:
1) does   x lapw0   work ???
2) list your .machines file. In particular: for TiC use only 2 cores 
(because of 2 atoms)

3) ls -als *output00*
4) what is at the end of *.output  ??? Please check for any errors.

Is your fftw-mpi compiled with the same compiler as wien2k ??
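For point 2), a minimal .machines file for a k-point-parallel run of a 2-atom case such as TiC on 2 cores might look like the sketch below. The node name is a placeholder; the full format (including mpi lines) is described in the WIEN2k user's guide:

```
# two k-parallel jobs, one core each, on a hypothetical node "node01"
1:node01
1:node01
granularity:1
extrafine:1
```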


Am 22.07.2019 um 20:45 schrieb Ricardo Moreira:
I had it at 4 as per the default value suggested during configuration 
but I changed it to 1 now. In spite of that, "x lapw0 -p" still did not 
generate case.vspup or case.vspdn.


Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread tran

More questions:

Was the calculation really initialized with spin-polarization?
If not, then only case.vsp is generated.

What is the message on the screen when "x lapw0 -p" is executed?


Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread Ricardo Moreira
I had it set to 4, the default value suggested during configuration, but I
have changed it to 1 now. In spite of that, "x lapw0 -p" still did not
generate case.vspup or case.vspdn.


Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread tran

Do you have the variable OMP_NUM_THREADS set in your .bashrc or .cshrc
file? If yes and the value is greater than 1, then set it to 1 and
execute "x lapw0 -p" again.
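A quick way to check this and to force single-threaded OpenMP for the current shell session (a bash sketch, independent of WIEN2k):

```shell
# Show whether OMP_NUM_THREADS is currently set
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS:-unset}"

# Set it to 1 for this session, then re-run "x lapw0 -p" from the case directory
export OMP_NUM_THREADS=1
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

For a permanent change, the export line goes into ~/.bashrc (or the equivalent setenv line into ~/.cshrc).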


Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread Ricardo Moreira
That is indeed the case: neither case.vspup nor case.vspdn was generated
after running "x lapw0 -p".

Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread tran

It seems that lapw0 does not generate case.vspup and
case.vspdn (and case.vsp for non-spin-polarized calculation).
Can you confirm that by executing "x lapw0 -p" on the command
line?
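A small sketch to confirm this from the case directory after running "x lapw0 -p" (the TiC case name is taken from this thread; cname is a helper variable introduced here):

```shell
# Check that lapw0 wrote non-empty spin-up/spin-down potential files;
# if case.vspup is missing, lapw1 -up aborts with "can't open unit: 18".
cname=TiC   # case name from this thread
for f in "$cname.vspup" "$cname.vspdn"; do
  if [ -s "$f" ]; then
    echo "$f: present, $(wc -c < "$f") bytes"
  else
    echo "$f: missing or empty"
  fi
done
```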


Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread Ricardo Moreira
The command "ls *vsp*" returns only the files "TiC.vspdn_st" and
"TiC.vsp_st", so it would appear that the file is not created at all when
using the -p switch to runsp_lapw.


On Mon, 22 Jul 2019 at 16:29,  wrote:

> Is the file TiC.vspup empty?
>
> On Monday 2019-07-22 17:24, Ricardo Moreira wrote:
>
> >Date: Mon, 22 Jul 2019 17:24:42
> >From: Ricardo Moreira 
> >Reply-To: A Mailing list for WIEN2k users <
> wien@zeus.theochem.tuwien.ac.at>
> >To: A Mailing list for WIEN2k users 
> >Subject: Re: [Wien] Parallel run problems with version 19.1
> >
> >Hi, and thanks for the reply.
> >Regarding serial calculations: yes, in both the non-spin-polarized and
> >spin-polarized cases you described, everything runs properly. As for
> >parallel, it fails in both cases, with the error I indicated in my
> >previous email.
> >
> >Best Regards,
> >Ricardo Moreira
> >
> >On Mon, 22 Jul 2019 at 16:09,  wrote:
> >  Hi,
> >
> >  What you should never do is to mix spin-polarized and
> >  non-spin-polarized calculations in the same directory.
> >
> >  Since your explanations about spin-polarized/non-spin-polarized are a
> >  bit confusing, the question is:
> >
> >  Does the calculation run properly (in parallel and serial) if
> >  everything (init_lapw and run_lapw) in a directory is done from the
> >  beginning in non-spin-polarized? Same question with spin-polarized.
> >
> >  F. Tran
> >
> >  On Monday 2019-07-22 16:37, Ricardo Moreira wrote:
> >
> >  >Date: Mon, 22 Jul 2019 16:37:30
> >  >From: Ricardo Moreira 
> >  >Reply-To: A Mailing list for WIEN2k users <
> wien@zeus.theochem.tuwien.ac.at>
> >  >To: wien@zeus.theochem.tuwien.ac.at
> >  >Subject: [Wien] Parallel run problems with version 19.1
> >  >
> >  >Dear Wien2k users,
> >  >I am running Wien2k on a computer cluster, compiled with the GNU
> >  >compilers version 7.2.3 and OpenMPI, on Scientific Linux release 7.4.
> >  >Since changing from version 18.2 to 19.1 I've been unable to run
> >  >Wien2k in parallel (neither mpi nor simple k-parallel seems to work),
> >  >with calculations aborting with the following message:
> >  >
> >  >    start       (Mon Jul 22 14:49:31 WEST 2019) with lapw0 (40/99 to go)
> >  >
> >  >    cycle 1     (Mon Jul 22 14:49:31 WEST 2019)         (40/99 to go)
> >  >
> >  >>   lapw0   -p  (14:49:31) starting parallel lapw0 at Mon Jul 22 14:49:31 WEST 2019
> >  > .machine0 : 8 processors
> >  >0.058u 0.160s 0:03.50 6.0%  0+0k 48+344io 5pf+0w
> >  >>   lapw1  -up -p   (14:49:35) starting parallel lapw1 at Mon Jul 22 14:49:35 WEST 2019
> >  >->  starting parallel LAPW1 jobs at Mon Jul 22 14:49:35 WEST 2019
> >  >running LAPW1 in parallel mode (using .machines)
> >  >2 number_of_parallel_jobs
> >  >     ava01 ava01 ava01 ava01(8)      ava21 ava21 ava21 ava21(8)    Summary of lapw1para:
> >  >   ava01         k=8     user=0  wallclock=0
> >  >   ava21         k=16    user=0  wallclock=0
> >  >**  LAPW1 crashed!
> >  >0.164u 0.306s 0:03.82 12.0%  0+0k 112+648io 1pf+0w
> >  >error: command   /homes/fc-up201202493/WIEN2k_19.1/lapw1para -up uplapw1.def   failed
> >  >
> >  >>   stop error
> >  >
> >  >Inspecting the error files, I find that the error printed to
> >  >uplapw1.error is:
> >  >
> >  >**  Error in Parallel LAPW1
> >  >**  LAPW1 STOPPED at Mon Jul 22 14:49:39 WEST 2019
> >  >**  check ERROR FILES!
> >  > 'INILPW' - can't open unit:  18
>
> >
> >  > 'INILPW' -filename: TiC.vspup
>
> >
> >  > 'INILPW' -  status: old  form: formatted
>
> >
> >  > 'LAPW1' - INILPW aborted unsuccessfully.
> >  > 'INILPW' - can't open unit:  18
>
> >
> >  > 'INILPW' -filename: TiC.vspup
>
> >
> >  > 'INILPW' -  status: old  form: formatted
>
> >
> >  > 'LAPW1' - INILPW aborted unsuccessfully.
> >  >
> >  >As this error message on previous posts to the mailing lists is
> often pointed out as being due to running init_lapw for a non
> >  spin-polarized case
> >  >and then using runsp_lapw I should clarify that this also occurs
> when attempting to run a non spin-polarized case and instead
> >  of TiC.vspup it
> >  >changes to TiC.vsp in the error message.
> >  >I should point out, for it may be related to this issue that
> serial runs have the problem that after I perform my first
> >  simulation on a folder if I
> >  >first start with a spin-polarized case and then do another
> init_lapw for non spin-polarized and attempt to do run_lapw I get
> >  the errors as in before
> >  >of "can't open unit: 18" (this also occurs if I first run a non
> spin-polarized simulation and then attempt to do a
> >  spin-polarized one on the same
> >  >folder). The workaround I found for this was making a new folder,
> but since 

Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread tran

Is the file TiC.vspup empty?

On Monday 2019-07-22 17:24, Ricardo Moreira wrote:


Date: Mon, 22 Jul 2019 17:24:42
From: Ricardo Moreira 
Reply-To: A Mailing list for WIEN2k users 
To: A Mailing list for WIEN2k users 
Subject: Re: [Wien] Parallel run problems with version 19.1

Hi and thanks for the reply,
Regarding serial calculations: yes, both non spin-polarized and
spin-polarized runs work properly in the cases you described. As for
parallel, it fails in both cases, with the error I indicated in my
previous email.

Best Regards,
Ricardo Moreira

Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread Ricardo Moreira
Hi and thanks for the reply,

Regarding serial calculations: yes, both non spin-polarized and
spin-polarized runs work properly in the cases you described. As for
parallel, it fails in both cases, with the error I indicated in my
previous email.

Best Regards,
Ricardo Moreira

On Mon, 22 Jul 2019 at 16:09,  wrote:

> Hi,
>
> What you should never do is to mix spin-polarized and
> non-spin-polarized in the same directory.
>
> Since your explanations about spin-polarized/non-spin-polarized are a
> bit confusing, the question is:
>
> Does the calculation run properly (in parallel and serial) if everything
> (init_lapw and run_lapw) in a directory is done from the beginning in
> non-spin-polarized? Same question with spin-polarized.
>
> F. Tran
>
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html

Re: [Wien] Parallel run problems with version 19.1

2019-07-22 Thread tran

Hi,

What you should never do is to mix spin-polarized and
non-spin-polarized in the same directory.

Since your explanations about spin-polarized/non-spin-polarized are a
bit confusing, the question is:

Does the calculation run properly (in parallel and serial) if everything
(init_lapw and run_lapw) in a directory is done from the beginning in
non-spin-polarized? Same question with spin-polarized.

F. Tran



[Wien] Parallel run problems with version 19.1

2019-07-22 Thread Ricardo Moreira
Dear Wien2k users,

I am running Wien2k on a computer cluster, compiled with the GNU compilers
version 7.2.3 and OpenMPI, on Scientific Linux release 7.4. Since changing
from version 18.2 to 19.1 I've been unable to run Wien2k in parallel
(neither mpi nor simple k-parallel seems to work), with
calculations aborting with the following message:

start   (Mon Jul 22 14:49:31 WEST 2019) with lapw0 (40/99 to go)

cycle 1 (Mon Jul 22 14:49:31 WEST 2019) (40/99 to go)

>   lapw0   -p  (14:49:31) starting parallel lapw0 at Mon Jul 22 14:49:31
WEST 2019
 .machine0 : 8 processors
0.058u 0.160s 0:03.50 6.0%  0+0k 48+344io 5pf+0w
>   lapw1  -up -p   (14:49:35) starting parallel lapw1 at Mon Jul 22
14:49:35 WEST 2019
->  starting parallel LAPW1 jobs at Mon Jul 22 14:49:35 WEST 2019
running LAPW1 in parallel mode (using .machines)
2 number_of_parallel_jobs
     ava01 ava01 ava01 ava01(8)      ava21 ava21 ava21 ava21(8)
Summary of lapw1para:
   ava01         k=8     user=0  wallclock=0
   ava21         k=16    user=0  wallclock=0
**  LAPW1 crashed!
0.164u 0.306s 0:03.82 12.0% 0+0k 112+648io 1pf+0w
error: command   /homes/fc-up201202493/WIEN2k_19.1/lapw1para -up
uplapw1.def   failed

>   stop error

Inspecting the error files I find that the error printed to uplapw1.error
is:

**  Error in Parallel LAPW1
**  LAPW1 STOPPED at Mon Jul 22 14:49:39 WEST 2019
**  check ERROR FILES!
 'INILPW' - can't open unit:  18
 'INILPW' -        filename: TiC.vspup
 'INILPW' -          status: old          form: formatted
 'LAPW1' - INILPW aborted unsuccessfully.
 'INILPW' - can't open unit:  18
 'INILPW' -        filename: TiC.vspup
 'INILPW' -          status: old          form: formatted
 'LAPW1' - INILPW aborted unsuccessfully.

Since this error message has, in previous posts to the mailing list, often
been attributed to running init_lapw for a non spin-polarized case and
then using runsp_lapw, I should clarify that it also occurs when running a
non spin-polarized case; the error message then reports TiC.vsp instead of
TiC.vspup.
I should also point out, as it may be related, a problem with serial runs:
after my first simulation in a folder, if I start with a spin-polarized
case, then run init_lapw again for non spin-polarized and attempt
run_lapw, I get the same "can't open unit: 18" errors (and likewise if I
first run a non spin-polarized simulation and then attempt a
spin-polarized one in the same folder). My workaround was to create a new
folder, but since the error message also involves TiC.vsp/vspup I thought
I would mention it.
Lastly, I should mention that I deleted the line
"15,'$file.tmp$updn', 'scratch','unformatted',0" from x_lapw, as I
previously had a lapw2 error, reported elsewhere on the mailing list,
that Professor Blaha indicated was solved by deleting that line (and
indeed it was). Whether this could be related to my current issues I have
no idea, so I felt it right to point it out.
Thanks in advance for any assistance that might be provided.

Best Regards,
Ricardo Moreira
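[Editorial note] For reference, the k-point parallel layout is controlled by the .machines file in the case directory. A minimal file for the two hosts appearing in the log above might look like the following (illustrative values only, not a prescription for this particular cluster):

```
granularity:1
1:ava01
1:ava21
```

Each "1:host" line defines one k-point parallel job on that host; MPI-parallel lapw0 is configured separately via its own optional lapw0: line.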


Re: [Wien] (no subject)

2019-07-22 Thread tran

In lapwso.def, the files used by the lapwso program are listed.
Are all these files present in your directory and not empty?
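[Editorial note] A quick way to run that check (a sketch, not part of WIEN2k; it assumes the usual .def line format unit,'filename','status','form',recl, and the helper name check_def is my own):

```shell
# Sketch: flag missing or empty files referenced in a WIEN2k .def file.
# Assumes each line looks like:  18,'TiC.vspup','old','formatted',0
check_def() {
  sed -n "s/^ *[0-9][0-9]*, *'\([^']*\)'.*/\1/p" "$1" | while read -r f; do
    [ -s "$f" ] || echo "missing or empty: $f"
  done
}
```

Run "check_def lapwso.def" in the case directory; any file it reports should be regenerated before calling lapwso again.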

F. Tran



[Wien] (no subject)

2019-07-22 Thread Aamir Shafique
Hello,
I have installed Wien2k 17.1 with compiler intel-18.0.1.

- Normal SCF cycles completed successfully using SCAN meta-GGA or PBE-GGA.


- SOC was initiated from the command line using initso_lapw
but I got the following error message:
lapwso lapwso.def   failed

>   stop error

With Best Regards,

Aamir Shafique
-
Postdoctoral fellow
Physical Sciences and Engineering
King Abdullah University of Science and Technology
Thuwal 23955, Saudi Arabia.
Direct: +966 54 5351602
Email: aamir.shafi...@kaust.edu.sa


Re: [Wien] Carrier concentration in BoltzTrap

2019-07-22 Thread Gavin Abo

See BoltzTraP Users group on the webpage at:

https://www.imc.tuwien.ac.at/forschungsbereich_theoretische_chemie/forschungsgruppen/prof_dr_gkh_madsen_theoretical_materials_chemistry/boltztrap2/

You will likely have to ask your question on that mailing list to get a
response.




[Wien] Carrier concentration in BoltzTrap

2019-07-22 Thread mitra narimani
Hello dear Wien2k users

I want to obtain the change of the Seebeck coefficient versus temperature
at a specific carrier concentration. Does anybody know how to get the
carrier concentration (n or p) in cm-3 for the structure, and what changes
must be made in case.intrans for this purpose? Please help me.
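[Editorial note] As a side note on units (generic, not BoltzTraP-specific advice; the helper name and interface below are my own): if you know a doping level in carriers per unit cell and the unit-cell volume in bohr^3, as WIEN2k reports it, the conversion to cm^-3 is a pure unit conversion:

```shell
# Sketch: convert carriers per unit cell to carriers per cm^3.
# $1 = carriers per unit cell, $2 = unit-cell volume in bohr^3.
carriers_per_cm3() {
  awk -v n="$1" -v v="$2" 'BEGIN {
    a0 = 0.52917721067e-8        # Bohr radius in cm
    printf "%.3e\n", n / (v * a0 * a0 * a0)
  }'
}
```

For example, "carriers_per_cm3 0.01 300" prints the density corresponding to 0.01 carriers per cell in a 300 bohr^3 cell.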