[Wien] Anu A (anuan...@gmail.com) has sent you a private message

2009-09-15 Thread anuani12
An HTML attachment was scrubbed...
URL: 
http://zeus.theochem.tuwien.ac.at/pipermail/wien/attachments/20090915/bf9e32ac/attachment.htm


[Wien] parallel compilation

2009-09-15 Thread Peter Blaha
 if you do not select fine grained parallel, siteconfig_lapw does not 
 give you options to tune parallel parameters.  does that mean that it is 
 not compiling parallel at all?  if i do not do fine tuning, i can get a 
 complete compilation. 

If one does NOT select fine grain parallel, siteconfig will not produce any
mpi-executables. This is clearly the proper choice:

i) for beginners
ii) on small computers, where you may not have installed mpi + fftw, or
    have only a slow network between them.
iii) when you intend to limit your unitcells to about 50 atoms/cell.

(Still k-point parallelization is possible, and this is much more efficient
than mpi for small systems).
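For k-point parallelization, WIEN2k reads a `.machines` file in the case directory. A minimal sketch splitting the k-list into two jobs on the local host (assumption: the `granularity:` line plus one `weight:host` line per job follows the standard `.machines` syntax; check the WIEN2k user's guide for your version):

```
granularity:1
1:localhost
1:localhost
```

With such a file, `run_lapw -p` would start two lapw1/lapw2 jobs, each working on part of the k-points.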

If you select fine grain parallelization, the mpi-executables will also be
produced. But you need:

i) a properly installed mpi (+ mpi-fortran compiler, which is usually created
   when mpi is installed). (Usually MPF is NOT ifort, but mpif90 !!)
   Note: mpi2 needs some small code changes in lapw0.F
ii) a corresponding Scalapack+Blacs (also suitable for your compiler)
   (usually it comes together with the mkl)
iii) FFTW routines (compiled with the mpi option)
iv) you need to properly specify the installation place (-L) and the
   name of the libraries (-l). Also make sure you understand 32-bit vs
   64-bit (em64t) issues.

In addition, this code is useful ONLY if you have unitcells with more than
about 50 atoms and you have hardware with a fast network (Infiniband);
Gbit ethernet may work, but will be slow.

On a SINGLE quad-core computer, mpi-parallelization is probably NOT worth the
effort
(use OMP_NUM_THREADS 2 (or 4))


linuxif9:MPF:ifort   <-- unlikely! (see above)



 # Linux PC system with IFC 10 compiler + mkl 10 (-ip is broken; -static does not give traceback-lines)
 linuxif9:FC:ifort
 linuxif9:MPF:ifort
 linuxif9:CC:cc
 linuxif9:FOPT:-FR -mp1 -w -prec_div -pc80 -pad -align -DINTEL_VML -traceback -O3 -xW
 linuxif9:FPOPT:$(FOPT) -FR -mp1 -w -prec_div -pc80 -pad -align -DINTEL_VML -traceback -I/opt/mpich2/include
 linuxif9:LDFLAGS:$(FOPT) -L/opt/intel/mkl/10.2.1.017/lib/em64t -pthread -i-static
 linuxif9:R_LIBS:-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_lapack -lmkl -liomp5 -lguide -lmkl_core
 linuxif9:DPARALLEL:'-DParallel'
 linuxif9:RP_LIBS:-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_lapack -lmkl_intel_lp64 -lmkl_scalapack_lp64 -lmkl_blacs_lp64 -lmkl_sequential -lmkl_intel_ilp64 -lmkl_scalapack_ilp64 -L/opt/fftw-2.1.5/lib/ -lfftw_mpi -lfftw -L/opt/mpich2/lib -lmpich -lfmpich
 linuxif9:MPIRUN:mpiexec _EXEC_
 
 i am running on a xeon, CentOS5 machine.  intel noncommercial.
 
 Any information would be appreciated.
 
 Thank you,
 JD
 
 On Mon, Sep 14, 2009 at 10:31 AM, Jeff DeReus jdereus at gmail.com wrote:
 
 fftw was compiled into /opt/fftw-2.1.5 with flags
 
   $ ./configure --prefix=/opt/fftw-2.1.5/ --enable-mpi --enable-threads
 
 Thank you,
 JD
 
 
 On Mon, Sep 14, 2009 at 10:27 AM, Laurence Marks L-marks at northwestern.edu wrote:
 
 It looks like you did not compile fftw (or it is somewhere else).
 
  2009/9/14 Jeff DeReus jdereus at gmail.com:
   Hello again.  i am having some issues compiling lapw0/1/2 modules in
   wien2k_09.  i am running on a CentOS 5.3 box.  intel non-commercial
   compilers and mkl.

   here are my current parallel settings from siteconfig.
  
 Current settings:
   RP  RP_LIB(SCALAPACK+PBLAS): -L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_lapack -lmkl_intel_lp64 -lmkl_scalapack_lp64 -lmkl_blacs_lp64 -lmkl_sequential -L/opt/fftw-2.1.5/lib/ -lfftw_mpi -lfftw -L/opt/mpich2/lib -lmpich
   FP  FPOPT(par.comp.options): $(FOPT) -FR -mp1 -w -prec_div -pc80 -pad -align -DINTEL_VML -traceback -I/opt/mpich2/include -I/opt/fftw-2.1.5/fortran
   MP  MPIRUN commando: mpiexec _EXEC_
  
   all other modules compile correctly.  if i do not enable parallel
   functionality, the compilation completes with no errors.

   when compiling /opt/wien2k/SRC_lapw0 it ends with these errors, which i
   have not been able to track down.
  
   fftw_para.o: In function `exec_fftw_para_':
   fftw_para.F:(.text+0x77): undefined reference to `fftwnd_f77_mpi_'
   fftw_para.F:(.text+0xb2): undefined reference to `fftwnd_f77_mpi_'
   fftw_para.o: In function `init_fftw_para_':
   fftw_para.F:(.text+0x101): undefined reference to

[Wien] parallel compilation

2009-09-15 Thread Jonas Baltrusaitis
Peter,

are you implying that it is possible to run one job on 2-4 cores in 'parallel' 
without any mpi and granularization? Could you elaborate on that?

Jonas


[Wien] parallel compilation

2009-09-15 Thread Jeff DeReus



--
Laurence Marks
Department of Materials Science and Engineering
MSE Rm 2036 Cook Hall
2220 N Campus Drive
Northwestern University
Evanston, IL 60208, USA
Tel: (847) 491-3996 Fax: (847) 491-7820
email: L-marks at northwestern dot edu
Web: www.numis.northwestern.edu

Chair, Commission on Electron Crystallography of IUCR
www.numis.northwestern.edu/
Electron crystallography is the branch of science that uses electron
scattering and imaging to study the structure of matter.
___
Wien mailing list
Wien at zeus.theochem.tuwien.ac.at
mailto:Wien at zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien




 



--

  P.Blaha
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-15671 FAX: +43-1-58801-15698
Email: blaha at theochem.tuwien.ac.at    WWW: http://info.tuwien.ac.at/theochem/
--------------------------------------------------------------------------




[Wien] Bond Valence Sums

2009-09-15 Thread Michael Fischer
Dear Laurence,

I have always used SoftBV to calculate Bond Valence Sums. It is not directly 
compatible with the Wien2k output, but it can read CIF files, which are 
converted into SHELX files before calculating the Bond Valence Sum.
I don't know how well the conversion works for every type of CIF file, but I 
applied it quite successfully a while ago. Alternatively, it should be easy 
to convert a CIF file into a SHELX-compatible format using some other 
crystallographic software.

SoftBV is available online at: http://kristall.uni-mki.gwdg.de/softbv/

Hope this helps you, best regards
Michael Fischer


[Wien] parallel compilation

2009-09-15 Thread Peter Blaha
The lapack/blas calls of the mkl are parallelized. This means that the largest
part of lapw1 (HNS and diagonalization) can be parallelized without any effort.

One can activate/deactivate it using
setenv OMP_NUM_THREADS 2   or 1 (csh syntax; bash uses a different syntax).

After userconfig this is even included, commented out, in your .bashrc/.cshrc file.

Uncomment it and try it out.
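In shell terms this is a one-liner; note that the standard OpenMP variable is spelled OMP_NUM_THREADS. A minimal sketch:

```shell
# bash/sh form (e.g. in .bashrc): let the threaded MKL use 2 OpenMP threads
export OMP_NUM_THREADS=2
# csh/tcsh equivalent (e.g. in .cshrc):   setenv OMP_NUM_THREADS 2
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

Set it to 1 to fall back to serial MKL calls.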


Jonas Baltrusaitis wrote:
 Peter,
 
 are you implying that it is possible to run one job on 2-4 cores in 
 'parallel' without any mpi and granularization? Could you elaborate on that?
 
 Jonas
 

[Wien] parallel compilation

2009-09-15 Thread Peter Blaha
In lapw0.F you have to replace all
   CALL MPI_ADDRESS(efgb(1)%v20,   efgb_address(2),  ierr)
by
  CALL MPI_GET_ADDRESS(efgb(1)%v20,   efgb_address(2),  ierr)
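The rename can be applied mechanically with sed; the sketch below demonstrates the substitution on a scratch copy of the offending line (GNU sed assumed; the real target is SRC_lapw0/lapw0.F, so keep a backup before editing it):

```shell
# Demonstrate the MPI-1 -> MPI-2 rename on a scratch copy of the offending line.
# (The real target is SRC_lapw0/lapw0.F; back it up first.)
printf 'CALL MPI_ADDRESS(efgb(1)%%v20,   efgb_address(2),  ierr)\n' > /tmp/lapw0_snippet.F
sed -i 's/MPI_ADDRESS(/MPI_GET_ADDRESS(/g' /tmp/lapw0_snippet.F
cat /tmp/lapw0_snippet.F
```

Since `MPI_GET_ADDRESS(` does not itself contain the pattern `MPI_ADDRESS(`, the substitution is safe to re-run.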

Intel's ifort is also my recommendation.

Check out our faq page on wien2k.at for scripts for queuing systems.
Of course, you will have to adapt them for your scheduler.

Jeff DeReus wrote:
 Peter,
 
 i am curious as to what code changes would need to be made for mpich2.  
 especially as i am using the mpich2 libraries for compilation.
 
 are there issues with the ifort compiler?  it was my assumption that if 
 i was using the intel mkl then the intel fortran compiler would be 
 preferred. 
 
 this is being run on a 64-bit, 64-node cluster.  i am interested to know 
 if anyone has had success running with a torque/maui scheduler setup and 
 threads.  since threads and processes are different, and i use the OSC 
 mpiexec (as opposed to mpirun) because it talks with torque better, have 
 any issues come up in that area?  i have personally never tried to 
 implement a thread mechanism on a cluster.  i could see some issues with 
 resource allocation, but if anyone has run into it, any information would 
 be appreciated.
 
 Thank you,
 JD
 
 

[Wien] Crystal field splitting in empty 3d band of Fe2O3

2009-09-15 Thread Yang Ding
Dear WIEN2k  users,

I am really new to WIEN2k, and I wonder if you could share your advice and 
experience on the following question concerning the crystal field splitting 
calculated with WIEN2k.

We want to understand whether the pre-edge splitting appearing in the Fe 
K-edge spectra (1s-4p transition) measured by emission-XANES on Fe2O3 
[Groot et al., J. Phys.: Condens. Matter 21 (2009) 104207, 
http://www.iop.org/EJ/abstract/0953-8984/21/10/104207/] is linked to 
crystal-field splitting in the empty 3d band. We did a very preliminary 
ground-state calculation using WIEN2k based on GGA+U (and LSDA+U) with 
U = 4 eV to check the crystal-field splitting in the empty d band above 
the Fermi level.

As a result, we found that, at 2-6 eV above the Fermi level, the energy of 
t2g is higher than that of eg. This result is similar to what was reported 
by Rollmann et al. (PHYSICAL REVIEW B 69, 165107 (2004), 
http://prola.aps.org/abstract/PRB/v69/i16/e165107) for Fe2O3. In their 
calculation (GGA/LSDA+U, U = 4 eV), the energy of t2g is also higher than 
that of eg. So my question is why t2g and eg are reversed in DFT, while the 
multiplet calculation (i.e., from Groot et al.) gives the opposite result.

I noticed that Glatzel et al. (PHYSICAL REVIEW B 77, 115133 (2008)) reported 
that they obtained the right crystal-field splitting using LDA+U (U = 6 eV) 
with WIEN2k. So we wonder if we might have missed something in our 
calculations?

Thanks  in advance for your help,
-- 

Yang Ding 
http://www.aps.anl.gov/Users/Scientific_Interest_Groups/HPSynC/people/%7EYDing.html

Staff Scientist

RM-B3180/Blgd-401

HPSynC at Advanced Photon Source

Argonne National Laboratory

9700 S. Cass Avenue

Argonne, IL 60439

Phone: 630-252-6288

Email: yangding at aps.anl.gov
