[Wien] Segmentation fault in Supercell Calculation

2015-07-28 Thread Lan, Wangwei
Dear WIEN2k user:


I am using WIEN2k 14.2 on CentOS release 5.8, with ifort 12.1.3 and MKL.



After generating a 2x2x1 supercell with 30 atoms, I tried to run the SCF 
calculation but got some errors, which I've attached at the end of this email. 
My WIEN2k installation is correct: it works well for other calculations, and 
the supercell also runs fine in non-parallel mode. I've searched the mailing 
list but couldn't find a solution. Could you give me a hint on how to solve 
this problem? Thank you very much.



Sincerely

Wangwei Lan



lapw0.error shows:



'Unknown' - SIGSEGV



super.dayfile shows:


Child id   0 SIGSEGV

 Child id   8 SIGSEGV

 Child id  18 SIGSEGV

 Child id  23 SIGSEGV

 Child id  17 SIGSEGV




The screen shows:

w2k_dispatch_signal(): received: Segmentation fault

w2k_dispatch_signal(): received: Segmentation fault

w2k_dispatch_signal(): received: Segmentation fault

w2k_dispatch_signal(): received: Segmentation fault

w2k_dispatch_signal(): received: Segmentation fault

w2k_dispatch_signal(): received: Segmentation fault

w2k_dispatch_signal(): received: Segmentation fault

--

MPI_ABORT was invoked on rank 18 in communicator MPI_COMM_WORLD

with errorcode 451782144.


NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.

You may or may not see output from other processes, depending on

exactly when Open MPI kills them.

--

--

mpirun has exited due to process rank 18 with PID 26388 on

node corfu.magnet.fsu.edu exiting without calling "finalize". This may

have caused other processes in the application to be

terminated by signals sent by mpirun (as reported here).

--

[corfu.magnet.fsu.edu:26369] 23 more processes have sent help message 
help-mpi-api.txt / mpi-abort

[corfu.magnet.fsu.edu:26369] Set MCA parameter "orte_base_help_aggregate" to 0 
to see all help / error messages


>   stop error


___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] Segmentation fault in Supercell Calculation

2015-07-28 Thread Lan, Wangwei
Dear Professor Marks:


I've checked everything you mentioned; it all looks fine, but it still doesn't 
work. I think the input files are OK, since I have no problem running in 
non-parallel mode.

I tried a smaller (2x1x1) supercell, and that works. However, I don't know why.

By the way, I have "ulimit -s unlimited" in my .bashrc file. I've also adjusted 
RKMAX and the RMT values before.


Sincerely

Wangwei Lan




From: wien-boun...@zeus.theochem.tuwien.ac.at on behalf of Laurence Marks
Sent: Tuesday, July 28, 2015 13:09
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] Segmentation fault in Supercell Calculation

You have what is called a "Segmentation Violation", which was detected by 4 of 
the nodes; they called an error handler that stopped the mpi job on all the 
CPUs.

This is normally because you have an error of some sort in your input files: 
any of case.in0, case.clmsum (and case.clmup/dn if you are running 
spin-polarized).

1) Check that you do not have overlapping spheres and/or other mistakes.
2) Check your error files, e.g. "cat *.error". Are any others (e.g. 
dstart.error) not empty? Did you ignore an error during setup?
3) Check the lapw0 output in case.output0* -- it may show what is wrong.

There are many possible sources; you have to find the specific one.
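Checking the error files for non-empty ones can be scripted; below is a minimal sketch (the `check_errors` helper name is mine for illustration, not a WIEN2k tool):

```shell
# check_errors: print the name of every non-empty *.error file in a
# WIEN2k case directory (helper name is hypothetical, not part of WIEN2k).
check_errors() {
  for f in "$1"/*.error; do
    # -s is true only for files that exist and have size > 0
    [ -s "$f" ] && echo "non-empty: $f"
  done
  return 0
}

# Usage: check_errors /path/to/case
```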






--
Professor Laurence Marks
Department of Materials Science and Engineering
Northwestern University
www.numis.northwestern.edu
Corrosion in 4D: MURI4D.numis.northwestern.edu
Co-Editor, Acta Cryst A
"Research is to see what everybody else has seen, and to think what nobody else 
has thought"
Albert Szent-Györgyi


Re: [Wien] Segmentation fault in Supercell Calculation

2015-07-28 Thread Lan, Wangwei
Dear Professor:


Yes, "x lapw0" works without mpi.


My mpi compiler is mpif90.

I use Open MPI, version 1.4.5.

The parallel compilation options are:

-FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io

I use the Intel MKL libraries, so that part should be fine.


Thanks very much for your help.

Sincerely
Wangwei Lan

From: wien-boun...@zeus.theochem.tuwien.ac.at on behalf of Laurence Marks
Sent: Tuesday, July 28, 2015 14:30
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] Segmentation fault in Supercell Calculation

Does a simple "x lapw0" work, i.e. without mpi, for this specific case?

If it does, then there is probably an error in how you have linked/compiled the 
mpi versions. Please provide:

a) The mpi compiler you used.
b) The type of mpi you are using (Open MPI, MVAPICH, Intel MPI, etc.).
c) The parallel compilation options.

N.B., a useful resource is 
https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor

N.N.B., "ulimit -s" is not needed; this is (now) done in the software.


Re: [Wien] Segmentation fault in Supercell Calculation

2015-07-28 Thread Lan, Wangwei
Dear sir:


I don't quite understand what you did to solve the problem. Do you mean that 
when you encounter this problem, you just run init_lapw again and the SCF 
calculation then works?

Thanks very much.



Sincerely

Wangwei Lan



From: wien-boun...@zeus.theochem.tuwien.ac.at on behalf of sikander Azam
Sent: Tuesday, July 28, 2015 14:36
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] Segmentation fault in Supercell Calculation


Dear sir,
I often get the same problem. What I do is keep the struct file, redo the 
initialization, and then it runs well.
Regards
Regards


Re: [Wien] Segmentation fault in Supercell Calculation

2015-07-28 Thread Lan, Wangwei
Dear Professor:



When I type "mpif90 --version", it gives me "ifort (IFORT) 12.1.3 20120212", 
so I thought it should work.


My library linking options are listed below:

Parallel execution:

 FFTW_LIB + FFTW_OPT:         -lfftw3_mpi -lfftw3 -L/opt/fftw3.3.3/lib  +  -DFFTW3 -I/opt/fftw3.3.3/include (already set)
 RP  RP_LIB(SCALAPACK+PBLAS): -lmkl_scalapack_lp64 -lmkl_blacs_lp64 $(R_LIBS)
 FP  FPOPT(par.comp.options): -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io

Compiler options:

 O   Compiler options:    -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -C -g
 F   FFTW options:        -DFFTW3 -I/opt/fftw3.3.3/include
 L   Linker flags:        $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
 P   Preprocessor flags:  '-DParallel'
 R   R_LIB (LAPACK+BLAS): -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lmkl_solver_lp64
 FL  FFTW_LIBS:           -lfftw3_mpi -lfftw3 -L/opt/fftw3.3.3/lib



Sincerely
Wangwei


From: wien-boun...@zeus.theochem.tuwien.ac.at on behalf of Laurence Marks
Sent: Tuesday, July 28, 2015 14:59
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] Segmentation fault in Supercell Calculation

Your options are probably wrong:

a) mpif90 is normally gfortran; the Intel version is mpiifort.
b) It is easy to use the wrong linking with the Intel MKL libraries. Please 
provide the information I requested.



Re: [Wien] Segmentation fault in Supercell Calculation

2015-07-28 Thread Lan, Wangwei
Dear professor:


I use Open MPI, version 1.4.5.


I added "-C -g" because some people on the mailing list said it might solve 
the problem.

Thanks for your advice; I will recompile the package soon.

Sincerely
Wangwei

From: wien-boun...@zeus.theochem.tuwien.ac.at on behalf of Laurence Marks
Sent: Tuesday, July 28, 2015 15:36
To: A Mailing list for WIEN2k users
Subject: Re: [Wien] Segmentation fault in Supercell Calculation

N.B., unless you are a code developer, "-C -g" is a terrible idea. Remove 
these flags; they can easily lead to the code crashing. Replace them with just 
"-O1".
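For reference, with "-C -g" dropped and "-O1" in their place, the compiler-options line reported earlier in this thread would become (a sketch; the label and the remaining flags are exactly those the user listed):

```
 O   Compiler options:    -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -O1
```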


[Wien] crystal field splitting

2015-08-23 Thread Lan, Wangwei
Dear Wien2k user:


I am very new to WIEN2k. I am running a calculation on our crystal system, 
which contains the transition metal Cr. I am particularly interested in the 
d-orbital splitting, i.e. the energy levels of the five d orbitals. Does 
anyone know how to calculate the orbital splitting using WIEN2k?

I've read several papers that use wannier90 to calculate the on-site energies 
and then interpret the on-site energy differences as crystal-field splitting. 
However, when I apply this method, I get results that contradict our 
group-theory analysis. I seriously doubt this kind of interpretation, and I 
hope you can help me. Thanks very much.




Sincerely

Wangwei Lan


Re: [Wien] crystal field splitting

2015-08-24 Thread Lan, Wangwei
Dear Professor Víctor Luaña:

Thanks very much for your kind reply. It is really helpful, and I will keep 
studying this field. Thanks again.

Sincerely
Wangwei Lan


From: wien-boun...@zeus.theochem.tuwien.ac.at on behalf of Víctor Luaña Cabal
Sent: Sunday, August 23, 2015 15:37
To: A Mailing list for WIEN2k users
Cc: Victor Luaña
Subject: Re: [Wien] crystal field splitting

On Sun, Aug 23, 2015 at 07:51:33PM +, Lan, Wangwei wrote:
> Dear Wien2k user:
>
>
> I am very new in WIEN2k. Now I am running case on our crystal system
> which contains a transition metal Cr. I am particularly interested in
> the d orbital splitting, the energy levels of 5 d orbitals. Does anyone
> know how to calculate the orbital splitting using WIEN2k?

Wangwei,

The answer is not simple, and there can be more than one opinion around. Let
me offer my 0.02 euros.

Crystal-field splitting parameters (delta-D, i.e. the t2g-eg splitting, the
Racah parameters, etc.) are obtained by fitting a model to theoretical or
experimental total-energy differences between correlated electronic states.
In other words, there is no such thing as an orbital splitting as a
well-defined quantity. The orbital approach is an interpretative description,
not a physical definition.

It has been decades since I last contributed to this old subject, and I
recommend that you follow the more recent papers by Profs. Luis Seijo and
Zoila Barandiarán of the UAM (Universidad Autónoma de Madrid). In their work
you will find a good description of old and modern treatments, like MOLCAS
calculations, relativistic contributions, and the huge importance of large
correlation treatments. Both contribute to the development of MOLCAS.

<http://www.uam.es/personal_pdi/ciencias/lseijo/>
<http://www.uam.es/personal_pdi/ciencias/yara/>

Notice that the field emerged from dealing with impurities within crystals,
so most of the development I learned was related to the molecular treatment
of embedded-impurity neighborhoods.

From a solid-state perspective (and your mention of Wannier functions makes
me think you may prefer that), notice that d-d, d-s, and d-p transitions
correspond to heavily correlated problems, and the wavefunction perspective
has a much longer tradition there than the TD-DFT one, but let me just say
that I know less about the latter. The lectures by Stefano Baroni on
calculating the color of natural dyes are simply awesome.

<http://stefano.baroni.me/presentations.html>

Best regards, and good luck if you are new to this field,
Dr. Víctor Luaña
--
 .  ."In science a person can be convinced by a good argument.
/ `' \   That is almost impossible in politics or religion"
   /(o)(o)\  (Adapted from Carl Sagan)
  /`. \/ .'\  "Lo mediocre es peor que lo bueno, pero también es peor
 /   '`'`   \ que lo malo, porque la mediocridad no es un grado, es una
 |  \'`'`/  | actitud" -- Jorge Wasenberg, 2015
 |  |'`'`|  | (Mediocre is worse than good, but it is also worse than
  \/`'`'`'\/  bad, because mediocrity is not a grade, it is an attitude)
===(((==)))==+=
! Dr.Víctor Luaña, in silico chemist & prof. ! I hate the bureaucracy
! Departamento de Química Física y Analítica ! imposed by companies to
! Universidad de Oviedo, 33006-Oviedo, Spain ! which I owe nothing:
! e-mail:   vic...@fluor.quimica.uniovi.es   ! amazon, ResearchGATE and
! phone: +34-985-103491  fax: +34-985-103125 ! the like.
++
 GroupPage : http://azufre.quimica.uniovi.es/
 (being reworked)


[Wien] formation energy

2015-09-24 Thread Lan, Wangwei
Dear WIEN2k user:


I am interested in the formation energy. I've searched the mailing list, but I 
still can't figure it out. I found that the formation energy is defined like 
this (taking Ga15MnN16 as an example):

formation energy = total energy of Ga15MnN16 - 15 * (total energy of Ga metal 
in its standard-state structure) - 1 * (total energy of Mn metal in its 
standard-state structure) - 16 * (total energy of N in its standard-state 
structure)


For our system, TbOCl, does that mean:

formation energy = total energy of TbOCl - (total energy of Tb metal in its 
standard-state structure) - (total energy of O in its standard state) - (total 
energy of Cl in its standard state)?

If that is correct, what are the standard-state structures for O and Cl? Do we 
need to do several additional calculations (for Tb metal, the O standard 
state, etc.) to get the formation energy?
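For what it's worth, the bookkeeping above can be sketched in a few lines. The standard states of O and Cl are the diatomic gases, so each atom in the compound is referenced to half a molecule's total energy; all the numbers below are made-up placeholders, not real WIEN2k results:

```python
# Formation-energy arithmetic for TbOCl (1 formula unit).
# All total energies below are hypothetical placeholders (units: Ry).
E_TbOCl = -45000.0   # total energy of the TbOCl cell
E_Tb    = -22000.0   # total energy per atom of Tb metal in its standard state
E_O2    = -300.0     # total energy of an isolated O2 molecule
E_Cl2   = -1840.0    # total energy of an isolated Cl2 molecule

# O and Cl standard states are the diatomic gases, so each atom
# contributes half a molecule's energy:
E_form = E_TbOCl - E_Tb - 0.5 * E_O2 - 0.5 * E_Cl2
print(E_form)  # -21930.0 with these placeholder numbers
```

Each reference energy (Tb metal, O2, Cl2) would indeed come from its own separate calculation.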

Thanks very much.


Sincerely

Wangwei Lan


[Wien] case.inkram energy shift

2016-03-06 Thread Lan, Wangwei
Dear WIEN2k user,


I am now doing optical-properties calculations for semimetals. I have a few 
questions related to the energy shift in the case.inkram file. If I don't 
apply an energy shift, I can't get reasonable results compared to experiment. 
That raises two questions.

First, what is the energy shift here? How do you determine whether you need an 
energy shift or not?
Second, how do you determine the exact value of the shift? Is it shown in some 
file?


Thanks in advance


Sincerely

Wangwei Lan


[Wien] xmgrace error

2016-04-05 Thread Lan, Wangwei
Dear WIEN2k User:


I calculated the band structure for my case, and I can generate 
case.spaghetti_ps correctly. But when I try to use xmgrace to plot the band 
structure, it gives the errors shown below. I am pretty sure WIEN2k itself 
works fine for my case.

I searched the mailing list, but I can't find an answer to this problem. Does 
anyone have an idea how to solve it? Thanks in advance.


[Error] No valid graph selected:  VIEW 0.12, 0.15, 0.90, 1.28
[Error] No valid axis selected:  XAXIS  LABEL CHAR SIZE 1.5
[Error] No valid axis selected:  XAXIS  TICKLABEL CHAR SIZE 1.25
[Error] No valid axis selected:  YAXIS  LABEL CHAR SIZE 1.5
[Error] No valid axis selected:  YAXIS  TICKLABEL CHAR SIZE 1.25
[Error] No valid axis selected:  XAXIS  TICK MAJOR GRID ON
[Error] No valid axis selected:  XAXIS  TICK SPEC TYPE BOTH
[Error] No valid axis selected:  XAXIS  TICK SPEC  11
[Error] No valid axis selected:  XAXIS  TICK MAJOR   0, 0.0
[Error] No valid axis selected:  XAXIS  TICKLABEL0 ,"GAM "
[Error] No valid axis selected:  XAXIS  TICK MAJOR   1, 0.17980
[Error] No valid axis selected:  XAXIS  TICKLABEL1 ,"Z   "
[Error] No valid axis selected:  XAXIS  TICK MAJOR   2, 0.70609
[Error] No valid axis selected:  XAXIS  TICKLABEL2 ,""
[Error] No valid axis selected:  XAXIS  TICK MAJOR   3, 1.23783
[Error] No valid axis selected:  XAXIS  TICKLABEL3 ,""
[Error] No valid axis selected:  XAXIS  TICK MAJOR   4, 1.75455
[Error] No valid axis selected:  XAXIS  TICKLABEL4 ,"L1  "
[Error] No valid axis selected:  XAXIS  TICK MAJOR   5, 2.13185
[Error] No valid axis selected:  XAXIS  TICKLABEL5 ,"B   "
[Error] No valid axis selected:  XAXIS  TICK MAJOR   6, 2.74100
[Error] No valid axis selected:  XAXIS  TICKLABEL6 ,"GAM "
[Error] No valid axis selected:  XAXIS  TICK MAJOR   7, 3.26251
[Error] No valid axis selected:  XAXIS  TICKLABEL7 ,"L   "
[Error] No valid axis selected:  XAXIS  TICK MAJOR   8, 3.57160
[Error] No valid axis selected:  XAXIS  TICKLABEL8 ,"X   "
[Error] No valid axis selected:  XAXIS  TICK MAJOR   9, 4.18743
[Error] No valid axis selected:  XAXIS  TICKLABEL9 ,""
[Error] No valid axis selected:  XAXIS  TICK MAJOR  10, 4.78405
[Error] No valid axis selected:  XAXIS  TICKLABEL   10 ,"Q   "


Best regards

Wangwei Lan



