Re: [SIESTA-L] Posible SPAM: elastic constant calculations

2007-03-20 Thread Chun Li
Thank you for your message.

I think that if we relax the other two lattice parameters at the same time, the 
value obtained would be the Young's modulus along c, not C_33.
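For a hexagonal crystal such as wurtzite ZnO, the distinction can be written explicitly (this is the standard linear-elasticity result, added here for clarity; it is not quoted from the thread):

```latex
% Straining along c with a and b frozen probes C_33 directly:
E(\varepsilon) = E_0 + \tfrac{1}{2} V_0\, C_{33}\, \varepsilon^{2}
\qquad \text{($a$, $b$ fixed)}
% Relaxing a and b at each strain instead yields the (smaller)
% Young's modulus along c, because lateral relaxation releases energy:
E_3 = C_{33} - \frac{2\,C_{13}^{2}}{C_{11} + C_{12}} \;\le\; C_{33}
```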

I have checked my calculations; the k-points and cutoff energies have been 
carefully tested. I wonder whether the error comes from the basis set we used 
(DZP)? Has anyone performed similar calculations? Is this a common error in 
SIESTA? I urgently need your help, thank you!

Best regards,

Chun
  - Original Message - 
  From: Vasilii Artyukhov 
  To: SIESTA-L@listserv.uam.es 
  Sent: Monday, March 19, 2007 4:41 PM
  Subject: Re: [SIESTA-L] Posible SPAM: elastic constant calculations


  I believe that if you apply strain in the c direction only, you might also 
want to relax the other two lattice parameters. This should decrease your 
values, perhaps to something reasonable.


  2007/3/19, Chun Li [EMAIL PROTECTED]: 
Dear SIESTA users,

I used SIESTA to calculate the elastic constant C_33 of bulk ZnO. Both LDA 
and GGA were employed. But the obtained results are much larger than the 
experimental value (for LDA I got 358 GPa, for GGA 282 GPa, while the 
experimental value is around 210 GPa). I wonder why such a big error appears? 
What important factors might be responsible for it? Could anyone give me some 
ideas? 

In my calculations, the cell is first fully relaxed; after that I apply 
different strains in the c direction and evaluate the total energies. 
Then I fit the obtained energies to extract the elastic constant. 
In both the relaxation and the energy calculations, double-zeta polarized 
(DZP) numerical atomic-orbital basis sets are used for both Zn and O. 
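The fitting step described above can be sketched as follows. This is a minimal illustration, not the poster's actual script; the only assumptions are the parabolic form E(eps) = E0 + (1/2) V0 C33 eps^2 for strain along c with a and b fixed, and the 1 eV/Å³ = 160.21766 GPa unit conversion:

```python
import numpy as np

EV_PER_ANG3_TO_GPA = 160.21766  # 1 eV/Angstrom^3 expressed in GPa

def c33_from_energies(strains, energies_ev, v0_ang3):
    """Fit E(eps) = E0 + 0.5 * V0 * C33 * eps**2 and return C33 in GPa.

    strains     -- dimensionless strains applied along c (a, b held fixed)
    energies_ev -- total energies at each strain, in eV
    v0_ang3     -- equilibrium cell volume, in Angstrom^3
    """
    # Quadratic fit; the linear term absorbs any residual stress
    # left over from the reference-cell relaxation.
    quad, _lin, _const = np.polyfit(strains, energies_ev, 2)
    return 2.0 * quad / v0_ang3 * EV_PER_ANG3_TO_GPA
```

With energies computed at several small strains (say, up to ±1%), the function returns the curvature of the fit converted to GPa.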

Thank you and best regards,

Chun Li




Re: [SIESTA-L] parallel problem

2006-09-27 Thread Chun Li
Hi, Kun,

It seems I met a similar problem before, but after compiling with mpich-gm, 
parallel siesta-2.0 now runs efficiently on our cluster (also an AMD-Opteron 
cluster). So I think you can try PGI. The following is my arch.make file; 
I hope it helps.

Chun

SIESTA_ARCH=pgf90-gm
#
FC=/share/data/software/mpich-gm/pgi/bin/mpif90
FC_ASIS=$(FC)
#
FFLAGS= -fastsse
FFLAGS_DEBUG= -g -O0
RANLIB=echo
COMP_LIBS=dc_lapack.a
#
#NETCDF_LIBS=/share/data/software/netcdf-3.6.1/lib/libnetcdf.a
#NETCDF_INTERFACE=libnetcdf_f90.a
#DEFS_CDF=-DCDF
#
MPI_ROOT_MYRINET=/share/data/software/mpich-gm/pgi
MYRINET_LIBS=-L/opt/gm/lib64 -lgm
MPI_LIBS_MYRINET= -L$(MPI_ROOT_MYRINET)/lib -lmpich $(MYRINET_LIBS)
#
MPI_ROOT_MPICH=//share/data/software/mpich-gm/pgi
MPI_LIBS_MPICH= -L$(MPI_ROOT_MPICH)/lib -lmpich 
#
MPI_ROOT=$(MPI_ROOT_MYRINET)
MPI_LIBS=$(MPI_LIBS_MYRINET)

#
MPI_INTERFACE=libmpi_f90.a
MPI_INCLUDE=$(MPI_ROOT)/include
DEFS_MPI=-DMPI
#

LIBS= -L/share/data/software/acml270/acml2.7.0/pgi64/lib  -lscalapack \
   -lblacs -lblacsF77init -lblacsCinit -lblacsF77init -lblacs \
   -llapack -lblas \
   $(MPI_LIBS)  $(NETCDF_LIBS)
SYS=cpu_time
DEFS= $(DEFS_CDF) $(DEFS_MPI)
#
#
# Important (at least for V5.0-1 of the pgf90 compiler...)
# Compile atom.f and electrostatic.f without optimization.
#
atom.o:
	$(FC) -c $(FFLAGS_DEBUG) atom.f
#
electrostatic.o:
	$(FC) -c $(FFLAGS_DEBUG) electrostatic.f
#
.F.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(DEFS) $<
.f.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $<
.F90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(DEFS) $<
.f90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $<
#

- Original Message - 
From: Tao K [EMAIL PROTECTED]
To: SIESTA-L@listserv.uam.es
Sent: Wednesday, September 27, 2006 5:13 PM
Subject: [SIESTA-L] parallel problem


 Dear Siesta user,
 
 I have compiled the parallel Siesta-2.0 on a AMD-Opteron cluster. When I
 tried to run it in parallel on 6 nodes, however, I found the head of the
 out file was :
 
 Siesta Version: siesta-2.0-release
 Architecture  : x86_64-unknown-linux-gnu--Nag
 Compiler flags: mpif90 -axP -xW -ip -O3 
 SERIAL version
 
 * Running in serial mode
 ... .
 
 Does this mean that it runs sequentially, not in parallel? But when I
 checked the nodes, I found the task running on 6 nodes. I don't know
 whether these 6 nodes run the same job in parallel, or whether each of them
 runs the job separately (so the same job was done 6 times by these 6 nodes).
 
 Attached below is my arch.make file. Any opinion would be appreciated!
 Thanks in advance!
 
 Sincerely Yours,
 Kun Tao
 






# 
 # This file is part of the SIESTA package.
 #
 # Copyright (c) Fundacion General Universidad Autonoma de Madrid:
 # E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal
 # and J.M.Soler, 1996-2006.
 # 
 # Use of this software constitutes agreement with the full conditions
 # given in the SIESTA license, as signed by all legitimate users.
 #
 .SUFFIXES:
 .SUFFIXES: .f .F .o .a .f90 .F90
 
 SIESTA_ARCH=x86_64-unknown-linux-gnu--Nag
 
 FPP=
 FPP_OUTPUT= 
 FC=mpif90
 # FC=ifort 
 RANLIB=ranlib
 
 SYS=nag
 
 SP_KIND=4
 DP_KIND=8
 KINDS=$(SP_KIND) $(DP_KIND)
 
 FFLAGS=-axP -xW -ip -O3  
 FPPFLAGS= -DFC_HAVE_FLUSH -DFC_HAVE_ABORT  
 LDFLAGS= 
 
 ARFLAGS_EXTRA=
 
 FCFLAGS_fixed_f=
 FCFLAGS_free_f90= 
 FLAGS_fixed_F=
 FPPFLAGS_free_F90= 
 
 BLAS_LIBS=/data/intel/cmkl/8.1/lib/64/libguide.a
 LAPACK_LIBS=/data/intel/cmkl/8.1/lib/64/libmkl_lapack.a
 BLACS_LIBS=/data/intel/cmkl/8.1/lib/64/libmkl_blacs.a
 SCALAPACK_LIBS=-L/data/intel/cmkl/8.1/lib/64/libmkl_scalapack.a 
 /data/BLACS/LIB/blacsF77init_MPI-LINUX-0.a /data/BLACS/LIB/blacs_MPI-LINUX-0.a
 
 COMP_LIBS=linalg.a 
 
 LIBS=$(SCALAPACK_LIBS) $(BLACS_LIBS) $(LAPACK_LIBS) $(BLAS_LIBS)  
 
 
 #SIESTA needs an F90 interface to MPI
 #This will give you SIESTA's own implementation
 #If your compiler vendor offers an alternative, you may change
 #to it here.
 MPI_INTERFACE=libmpi_f90.a
 MPI_INCLUDE=/data/mpich_ifort/include
 
 DEFS_MPI=-DMPI
 DEFS= $(DEFS_MPI)
 
 #Dependency rules are created by autoconf according to whether
 #discrete preprocessing is necessary or not.
 .F.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_fixed_F) $<
 .F90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_free_F90) $<
 .f.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_fixed_f) $<
 .f90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_free_f90) $<
 




Re: [SIESTA-L] Re: [SIESTA-L] ( )Re: [SIESTA-L] Test error?

2006-09-14 Thread Chun Li
Hi, Dr. Kong,

Thank you very much for your timely message!
Following your advice, I have managed to run parallel SIESTA on our cluster :).

Best regards.

Chun
  - Original Message - 
  From: Yong Kong 
  To: SIESTA-L@listserv.uam.es 
  Sent: Thursday, September 14, 2006 10:06 PM
  Subject: Re: [SIESTA-L] Re: [SIESTA-L] ( )Re: [SIESTA-L] Test error?


  Hi Chun,

  I just came back to Stuttgart and saw your post in the list. 

  Previously we met a similar problem when running siesta-2.0 on our IBM p5 
nodes. After trying all kinds of compile options, 
  we finally found the cause: we were using our old input fdf file directly 
from siesta-1.3. In that input file we had 
  AllocReportLevel = 3. However, that does not work in siesta-2.0. Simply 
delete it from your fdf file or set it to 0. 
  That worked for us.
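  In fdf terms, the fix Yong describes amounts to changing one line of the input (an illustrative fragment, not the actual file from the thread):

```
# siesta-1.3 input line reported above to hang siesta-2.0:
#   AllocReportLevel  3
# For siesta-2.0: set it to 0, or delete the line entirely.
AllocReportLevel  0
```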

  --Yong



   
  On 9/9/06, Chun Li [EMAIL PROTECTED] wrote: 
 
Hi, I have tried that, but the same problem still exists.
I wonder, is there anyone else who has never managed to run parallel SIESTA 
successfully, like me? I am really nervous about this.

Chun
  - Original Message - 
  From: Jyh-Shyong Ho 
  To: SIESTA-L@listserv.uam.es 
  Sent: Friday, September 08, 2006 5:25 PM
  Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] ()Re: [SIESTA-L] Test error?

   
  Please try the PGI version, and see if it works on your computer.

  Ho

  Chun Li wrote:

Hi, Thank you very much!

I have modified your intel-mkl arch.make file and managed to build 
parallel siesta on our cluster. But the strange problem still exists!

In the beginning:

Siesta Version: siesta-2.0-release
Architecture  : x86_64-unknown-linux-gnu--Intel
Compiler flags: /opt/mpich/intel/bin/mpif90  -no-ipo -g -w -mp -tpp7 
-O3 -no-ipo
PARALLEL version
NetCDF-capable

* Running on 4 nodes in parallel
 Start of run:   8-SEP-2006  17:12:49

When the output reaches the following lines, there is no further output!

siesta:  Simulation parameters 

siesta:
siesta: The following are some of the parameters of the simulation.
siesta: A complete list of the parameters used, including default 
values, 
siesta: can be found in file out.fdf
siesta:
coor:   Atomic-coordinates input format  = Cartesian coordinates
coor:  (in Angstroms)

I wonder what happened??

Any idea or suggestions are highly appreciated.

Chun

  - Original Message - 
  From: Jyh-Shyong Ho 
  To: SIESTA-L@listserv.uam.es 
  Sent: Friday, September 08, 2006 1:09 PM
  Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] (Rising alert - this mail may be spam) Re: [SIESTA-L] Test error?

   
  Hi,

  I have managed to build parallel SIESTA successfully on our dual-core 
  dual Opteron cluster with the following combination:

  1. Intel compiler 9
  2. OpenMPI library built with Intel compiler 9 
  3. Intel CMKL 8.0 library for blas, lapack, BLACS and Scalapack 
library.
  4. NETCDF built with Intel compiler 9

  and

  1. PGI 6.1 compiler
  2. OpenMPI library built with PGI compiler
  3. BLACS/SCALAPACK library built with OpenMPI/PGI compiler 
  4. NETCDF built with PGI compiler

  Perhaps you can try these combinations. I tried mpich and lam but failed
  to build the executable, so I changed to OpenMPI;
  I found that OpenMPI is much more robust than MPICH and LAM. 

  Jyh-Shyong Ho, Ph.D.
  Research Scientist
  National Center for High Performance Computing
  Hsinchi, Taiwan, ROC

  Chun Li wrote:

Hi, Nestor Correia,

Thanks for your kind help. Unfortunately, I still cannot solve the problem. I 
have spent many days on this strange problem, tried mpich-pgi, mpich-intel, 
intel-mkl, mpich-gm, etc., and feel really.. I wonder whether it is 
impossible to run parallel siesta on our cluster!

Thank you anyway.

Chun

- Original Message - 
From: Nestor Correia [EMAIL PROTECTED]
To: SIESTA-L@listserv.uam.es
Sent: Thursday, September 07, 2006 10:23 PM
Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] Test error?


Hi Chun Li,

This one works fine. I followed the instructions from Sebastien Le Roux
(look in the archives). I needed to recompile all the libraries.


[EMAIL PROTECTED] Src]$ more arch.make

#
# This file is part of the SIESTA package.
#
# Copyright (c) Fundacion General Universidad Autonoma de Madrid:
# E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal
# and J.M.Soler, 1996-2006.
#
# Use of this software constitutes agreement with the full conditions
# given in the SIESTA license, as signed by all legitimate users.
#
SIESTA_ARCH=pgf90-mpich

Re: [SIESTA-L] parallel running problem

2006-09-10 Thread Chun Li
Hi, Marcos,

Thank you for your suggestions and kind words.

I wonder if it is a problem with ssh/rsh? Our cluster uses ssh by default, but 
the SIESTA guide says nothing about that.

Chun

- Original Message - 
From: Marcos Verissimo Alves [EMAIL PROTECTED]
To: SIESTA-L@listserv.uam.es
Sent: Sunday, September 10, 2006 2:30 AM
Subject: Re: [SIESTA-L]


Hi Chun,

A while ago I posted a text file to the list with a detailed set of
instructions for compiling siesta and many other libraries from scratch.
It should be in the mailing-list archive; it might help you.

One word of caution: looking at your optimization options, I see you have
set one of them to -O3. Be careful with -O3: besides sometimes making
your code slower, it can give you incorrect results. Of course this will only
become a problem once you get siesta to run, but then you will have to worry
about this kind of thing for efficiency reasons.

If it is any consolation, it took me a few months to compile and run
siesta correctly on a cluster the first time. That was siesta
1.3f1p, but as far as I can see, a good many people have a hard time compiling
and running siesta in parallel.
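Marcos's -O3 warning can be acted on per file, in the same way the arch.make files in this thread already drop optimization for atom.f and electrostatic.f. A sketch, where `suspect.f` is a hypothetical file name standing in for whichever source file misbehaves:

```make
# Keep the global flags, but compile one suspect file without
# optimization, mirroring the existing atom.o/electrostatic.o rules.
FFLAGS= -O3
FFLAGS_DEBUG= -g -O0
suspect.o:
	$(FC) -c $(FFLAGS_DEBUG) suspect.f
```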

Cheers,

Marcos

 Hi, I have tried that, but the same problem still exists.
 I wonder, is there anyone else who has never managed to run parallel SIESTA
 successfully, like me? I am really nervous about this.

 Chun
   - Original Message -
   From: Jyh-Shyong Ho
   To: SIESTA-L@listserv.uam.es
   Sent: Friday, September 08, 2006 5:25 PM
   Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] ()Re: [SIESTA-L] Test error?


   Please try the PGI version, and see if it works on your computer.

   Ho

   Chun Li wrote:

 Hi, Thank you very much!

 I have modified your intel-mkl arch.make file and managed to build
 parallel siesta on our cluster. But the strange problem still exists!

 In the beginning:

 Siesta Version: siesta-2.0-release
 Architecture  : x86_64-unknown-linux-gnu--Intel
 Compiler flags: /opt/mpich/intel/bin/mpif90  -no-ipo -g -w -mp -tpp7
 -O3 -no-ipo
 PARALLEL version
 NetCDF-capable

 * Running on 4 nodes in parallel
  Start of run:   8-SEP-2006  17:12:49

 When the output reaches the following lines, there is no further output!

 siesta:  Simulation parameters
 
 siesta:
 siesta: The following are some of the parameters of the simulation.
 siesta: A complete list of the parameters used, including default
 values,
 siesta: can be found in file out.fdf
 siesta:
 coor:   Atomic-coordinates input format  = Cartesian coordinates
 coor:  (in Angstroms)

 I wonder what happened??

 Any idea or suggestions are highly appreciated.

 Chun

   - Original Message -
   From: Jyh-Shyong Ho
   To: SIESTA-L@listserv.uam.es
   Sent: Friday, September 08, 2006 1:09 PM
   Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] (Rising alert - this mail may be spam) Re: [SIESTA-L] Test error?


   Hi,

   I have managed to build parallel SIESTA successfully on our
 dual-core
   dual Opteron cluster with the following combination:

   1. Intel compiler 9
   2. OpenMPI library built with Intel compiler 9
   3. Intel CMKL 8.0 library for blas, lapack, BLACS and Scalapack
 library.
   4. NETCDF built with Intel compiler 9

   and

   1. PGI 6.1 compiler
   2. OpenMPI library built with PGI compiler
   3. BLACS/SCALAPACK library built with OpenMPI/PGI compiler
   4. NETCDF built with PGI compiler

   Perhaps you can try these combinations. I tried mpich and lam but failed
   to build the executable, so I changed to OpenMPI;
   I found that OpenMPI is much more robust than MPICH and LAM.

   Jyh-Shyong Ho, Ph.D.
   Research Scientist
   National Center for High Performance Computing
   Hsinchi, Taiwan, ROC

   Chun Li wrote:

 Hi, Nestor Correia,

 Thanks for your kind help. Unfortunately, I still cannot solve the
 problem. I have spent many days on this strange problem, tried
 mpich-pgi, mpich-intel, intel-mkl, mpich-gm, etc., and feel really.. I
 wonder whether it is impossible to run parallel siesta on our cluster!

 Thank you anyway.

 Chun

 - Original Message -
 From: Nestor Correia [EMAIL PROTECTED]
 To: SIESTA-L@listserv.uam.es
 Sent: Thursday, September 07, 2006 10:23 PM
 Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] Test error?


 Hi Chun Li,

 This one works fine. I followed the instructions from Sebastien Le Roux
 (look in the archives). I needed to recompile all the libraries.


 [EMAIL PROTECTED] Src]$ more arch.make

 #
 # This file is part of the SIESTA package.
 #
 # Copyright (c) Fundacion General Universidad Autonoma de Madrid:
 # E.Artacho, J.Gale, A.Garcia

Re: [SIESTA-L] (Rising alert - this mail may be spam) Re: [SIESTA-L] ( )Re: [SIESTA-L] Test error?

2006-09-09 Thread Chun Li
Hi, I have tried that, but the same problem still exists.
I wonder, is there anyone else who has never managed to run parallel SIESTA 
successfully, like me? I am really nervous about this.

Chun
  - Original Message - 
  From: Jyh-Shyong Ho 
  To: SIESTA-L@listserv.uam.es 
  Sent: Friday, September 08, 2006 5:25 PM
  Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] ()Re: [SIESTA-L] Test error?


  Please try the PGI version, and see if it works on your computer.

  Ho

  Chun Li wrote:

Hi, Thank you very much!

I have modified your intel-mkl arch.make file and managed to build parallel 
siesta on our cluster. But the strange problem still exists!

In the beginning:

Siesta Version: siesta-2.0-release
Architecture  : x86_64-unknown-linux-gnu--Intel
Compiler flags: /opt/mpich/intel/bin/mpif90  -no-ipo -g -w -mp -tpp7 -O3 
-no-ipo
PARALLEL version
NetCDF-capable

* Running on 4 nodes in parallel
 Start of run:   8-SEP-2006  17:12:49

When the output reaches the following lines, there is no further output!

siesta:  Simulation parameters 

siesta:
siesta: The following are some of the parameters of the simulation.
siesta: A complete list of the parameters used, including default values,
siesta: can be found in file out.fdf
siesta:
coor:   Atomic-coordinates input format  = Cartesian coordinates
coor:  (in Angstroms)

I wonder what happened??

Any idea or suggestions are highly appreciated.

Chun

  - Original Message - 
  From: Jyh-Shyong Ho 
  To: SIESTA-L@listserv.uam.es 
  Sent: Friday, September 08, 2006 1:09 PM
  Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] (Rising alert - this mail may be spam) Re: [SIESTA-L] Test error?


  Hi,

  I have managed to build parallel SIESTA successfully on our dual-core 
  dual Opteron cluster with the following combination:

  1. Intel compiler 9
  2. OpenMPI library built with Intel compiler 9
  3. Intel CMKL 8.0 library for blas, lapack, BLACS and Scalapack library.
  4. NETCDF built with Intel compiler 9

  and

  1. PGI 6.1 compiler
  2. OpenMPI library built with PGI compiler
  3. BLACS/SCALAPACK library built with OpenMPI/PGI compiler
  4. NETCDF built with PGI compiler

  Perhaps you can try these combinations. I tried mpich and lam but failed
  to build the executable, so I changed to OpenMPI;
  I found that OpenMPI is much more robust than MPICH and LAM.

  Jyh-Shyong Ho, Ph.D.
  Research Scientist
  National Center for High Performance Computing
  Hsinchi, Taiwan, ROC

  Chun Li wrote:

Hi, Nestor Correia,

Thanks for your kind help. Unfortunately, I still cannot solve the problem. I 
have spent many days on this strange problem, tried mpich-pgi, mpich-intel, 
intel-mkl, mpich-gm, etc., and feel really.. I wonder whether it is 
impossible to run parallel siesta on our cluster!

Thank you anyway.

Chun

- Original Message - 
From: Nestor Correia [EMAIL PROTECTED]
To: SIESTA-L@listserv.uam.es
Sent: Thursday, September 07, 2006 10:23 PM
Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] Test error?


Hi Chun Li,

This one works fine. I followed the instructions from Sebastien Le Roux
(look in the archives). I needed to recompile all the libraries.


[EMAIL PROTECTED] Src]$ more arch.make

#
# This file is part of the SIESTA package.
#
# Copyright (c) Fundacion General Universidad Autonoma de Madrid:
# E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal
# and J.M.Soler, 1996-2006.
#
# Use of this software constitutes agreement with the full conditions
# given in the SIESTA license, as signed by all legitimate users.
#
SIESTA_ARCH=pgf90-mpich
#
#FC=pgf90
FC=mpif90
FC_ASIS=$(FC)
#
FFLAGS= -fast
FFLAGS_DEBUG= -g -O0
RANLIB=echo
COMP_LIBS=dc_lapack.a
#
NETCDF_LIBS= #  /usr/local/netcdf-3.5/lib/pgi/libnetcdf.a
NETCDF_INTERFACE=#  libnetcdf_f90.a
DEFS_CDF=#  -DCDF
#
MPI_INTERFACE=libmpi_f90.a
#MPI_INCLUDE=/usr/local/include
MPI_INCLUDE=/usr/local/ibg2/mpi/pgi/mvapich-0.9.5-mlx2.0.1/include
DEFS_MPI=-DMPI
#
# There are (were?) some problems with command-line processing compatibility
# that forced the extraction of pgi.aux and pgiarg as independent
# libraries (details unfortunately lost)
#
#LIBS= -L/usr/local/lib/pgi \
#  -lscalapack -l1upblas -l1utools -l.pgi.aux -lredist \
#  -lfblacs  -llapack -lblas \
#   -l1umpich -lpgiarg $(NETCDF_LIBS)
HOME_LIB=/home/ranescor/lib
LIBS= -L/home/ranescor/lib \
-lscalapack \
$(HOME_LIB)/blacsCinit_MPI-LINUX-0.a
$(HOME_LIB)/blacsF77init_MPI-LINUX-0.a $(HOME_LIB)/blacs_MPI-LINUX-0.a \
   -llapack  -lblas \
#   -l1upblas -l1utools -l.pgi.aux -lredist \
#  -lfblacs  -llapack -lblas \
#   -l1umpich -lpgiarg $(NETCDF_LIBS)
SYS=cpu_time
DEFS= $(DEFS_CDF) $(DEFS_MPI)
#
#
# Important (at least for V5.0-1

Re: [SIESTA-L] (Rising alert - this mail may be spam) Re: [SIESTA-L] Test error?

2006-09-08 Thread Chun Li
Hi, Nestor Correia,

Thanks for your kind help. Unfortunately, I still cannot solve the problem. I 
have spent many days on this strange problem, tried mpich-pgi, mpich-intel, 
intel-mkl, mpich-gm, etc., and feel really.. I wonder whether it is 
impossible to run parallel siesta on our cluster!

Thank you anyway.

Chun

- Original Message - 
From: Nestor Correia [EMAIL PROTECTED]
To: SIESTA-L@listserv.uam.es
Sent: Thursday, September 07, 2006 10:23 PM
Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] Test error?


Hi Chun Li,

This one works fine. I followed the instructions from Sebastien Le Roux
(look in the archives). I needed to recompile all the libraries.


[EMAIL PROTECTED] Src]$ more arch.make

#
# This file is part of the SIESTA package.
#
# Copyright (c) Fundacion General Universidad Autonoma de Madrid:
# E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal
# and J.M.Soler, 1996-2006.
#
# Use of this software constitutes agreement with the full conditions
# given in the SIESTA license, as signed by all legitimate users.
#
SIESTA_ARCH=pgf90-mpich
#
#FC=pgf90
FC=mpif90
FC_ASIS=$(FC)
#
FFLAGS= -fast
FFLAGS_DEBUG= -g -O0
RANLIB=echo
COMP_LIBS=dc_lapack.a
#
NETCDF_LIBS= #  /usr/local/netcdf-3.5/lib/pgi/libnetcdf.a
NETCDF_INTERFACE=#  libnetcdf_f90.a
DEFS_CDF=#  -DCDF
#
MPI_INTERFACE=libmpi_f90.a
#MPI_INCLUDE=/usr/local/include
MPI_INCLUDE=/usr/local/ibg2/mpi/pgi/mvapich-0.9.5-mlx2.0.1/include
DEFS_MPI=-DMPI
#
# There are (were?) some problems with command-line processing compatibility
# that forced the extraction of pgi.aux and pgiarg as independent
# libraries (details unfortunately lost)
#
#LIBS= -L/usr/local/lib/pgi \
#  -lscalapack -l1upblas -l1utools -l.pgi.aux -lredist \
#  -lfblacs  -llapack -lblas \
#   -l1umpich -lpgiarg $(NETCDF_LIBS)
HOME_LIB=/home/ranescor/lib
LIBS= -L/home/ranescor/lib \
-lscalapack \
$(HOME_LIB)/blacsCinit_MPI-LINUX-0.a
$(HOME_LIB)/blacsF77init_MPI-LINUX-0.a $(HOME_LIB)/blacs_MPI-LINUX-0.a \
   -llapack  -lblas \
#   -l1upblas -l1utools -l.pgi.aux -lredist \
#  -lfblacs  -llapack -lblas \
#   -l1umpich -lpgiarg $(NETCDF_LIBS)
SYS=cpu_time
DEFS= $(DEFS_CDF) $(DEFS_MPI)
#
#
# Important (at least for V5.0-1 of the pgf90 compiler...)
# Compile atom.f and electrostatic.f without optimization.
#
atom.o:
	$(FC) -c $(FFLAGS_DEBUG) atom.f
#
electrostatic.o:
	$(FC) -c $(FFLAGS_DEBUG) electrostatic.f
#
.F.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(DEFS) $<
.f.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $<
.F90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(DEFS) $<
.f90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $<
#
[EMAIL PROTECTED] Src]$



On Thu, 2006-09-07 at 21:30 +0800, Chun Li wrote:
 Hi, can anyone give me an arch.make file for parallel siesta?
 Thanks a lot!
 I am really confused by this problem.
  
 Regards.
  
 Chun
 - Original Message - 
 From: Chun Li 
 To: SIESTA-L@listserv.uam.es 
 Sent: Wednesday, September 06, 2006 9:10 AM
 Subject: (Rising alert - this mail may be spam) [SIESTA-L] Test error?
 
 
 Dear all,
  
 I just compiled parallel siesta-2.0 on our 64bit 2G Opteron
 CPU cluster. But when I tried to make the examples in the Test
 folder, the following error appeared (e.g. Fe):
  
  Running fe test...
 == Copying pseudopotential file for Fe...
 == Running SIESTA as ../../../Src/siesta
 MPICH-GM Error: Need to obtain the job magic number in
 GMPI_MAGIC !
 make: *** [completed] Error 141
 
 I tried other test examples; the same error occurred!
  
 I also tried to submit my own calculation test to our cluster
 with 4 or 8 CPUs; it runs normally in parallel at the
 beginning, but then hangs without any error or output
 after about two minutes. The last lines in the output file
 are:
  
 siesta:  Simulation parameters
 
 siesta:
 siesta: The following are some of the parameters of the
 simulation.
 siesta: A complete list of the parameters used, including
 default values,
 siesta: can be found in file out.fdf
 siesta:
 coor:   Atomic-coordinates input format  = Cartesian
 coordinates
 coor:  (in Angstroms)
  
 Could anyone give me some ideas? My arch.make file is as
 follows:
  
 # 
 # This file is part of the SIESTA package.
 #
 # Copyright (c) Fundacion General Universidad Autonoma de
 Madrid:
 # E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon,
 D.Sanchez-Portal
 # and J.M.Soler, 1996-2006.
 # 
 # Use of this software constitutes agreement

Re: [SIESTA-L] Test error?

2006-09-07 Thread Chun Li
Hi, can anyone give me an arch.make file for parallel siesta? Thanks a 
lot!
I am really confused by this problem.

Regards.

Chun
  - Original Message - 
  From: Chun Li 
  To: SIESTA-L@listserv.uam.es 
  Sent: Wednesday, September 06, 2006 9:10 AM
  Subject: (Rising alert - this mail may be spam) [SIESTA-L] Test error?


  Dear all,

  I just compiled parallel siesta-2.0 on our 64bit 2G Opteron CPU cluster. But 
when I tried to make the examples in the Test folder, the following error 
appeared (e.g. Fe):

   Running fe test...
  == Copying pseudopotential file for Fe...
  == Running SIESTA as ../../../Src/siesta
  MPICH-GM Error: Need to obtain the job magic number in GMPI_MAGIC !
  make: *** [completed] Error 141

  I tried other test examples; the same error occurred!

  I also tried to submit my own calculation test to our cluster with 4 or 8 
CPUs; it runs normally in parallel at the beginning, but then hangs 
without any error or output after about two minutes. The last lines in 
the output file are:

  siesta:  Simulation parameters 

  siesta:
  siesta: The following are some of the parameters of the simulation.
  siesta: A complete list of the parameters used, including default values,
  siesta: can be found in file out.fdf
  siesta:
  coor:   Atomic-coordinates input format  = Cartesian coordinates
  coor:  (in Angstroms)

  Could anyone give me some ideas? My arch.make file is as follows:

  # 
  # This file is part of the SIESTA package.
  #
  # Copyright (c) Fundacion General Universidad Autonoma de Madrid:
  # E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal
  # and J.M.Soler, 1996-2006.
  # 
  # Use of this software constitutes agreement with the full conditions
  # given in the SIESTA license, as signed by all legitimate users.
  #
  SIESTA_ARCH=pgf90-matterhorn-gm
  #
  FC=/share/data/software/mpich-gm/pgi/bin/mpif90
  FC_ASIS=$(FC)
  #
  FFLAGS= -fastsse
  FFLAGS_DEBUG= -g -O0
  RANLIB=echo
  COMP_LIBS=dc_lapack.a
  #
  NETCDF_LIBS= #  /usr/local/netcdf-3.5/lib/pgi/libnetcdf.a
  NETCDF_INTERFACE=#  libnetcdf_f90.a
  DEFS_CDF=#  -DCDF
  #
  MPI_ROOT_MYRINET=/share/data/software/mpich-gm/pgi
  MYRINET_LIBS=-L/opt/gm/lib64 -lgm
  MPI_LIBS_MYRINET= -L$(MPI_ROOT_MYRINET)/lib -lmpich $(MYRINET_LIBS)
  #
  MPI_ROOT_MPICH=//share/data/software/mpich-gm/pgi
  MPI_LIBS_MPICH= -L$(MPI_ROOT_MPICH)/lib -lmpich 
  #
  MPI_ROOT=$(MPI_ROOT_MYRINET)
  MPI_LIBS=$(MPI_LIBS_MYRINET)

  #
  MPI_INTERFACE=libmpi_f90.a
  MPI_INCLUDE=$(MPI_ROOT)/include
  DEFS_MPI=-DMPI
  #

  LIBS= -L/share/data/software/acml270/acml2.7.0/pgi64/lib  -lscalapack \
 -lblacs -lblacsF77init -lblacsCinit -lblacsF77init -lblacs \
 -llapack -lblas \
 $(MPI_LIBS)  $(NETCDF_LIBS)
  SYS=cpu_time
  DEFS= $(DEFS_CDF) $(DEFS_MPI)
  #
  #
  # Important (at least for V5.0-1 of the pgf90 compiler...)
  # Compile atom.f and electrostatic.f without optimization.
  #
  atom.o:
	$(FC) -c $(FFLAGS_DEBUG) atom.f
  #
  electrostatic.o:
	$(FC) -c $(FFLAGS_DEBUG) electrostatic.f
  #
  .F.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(DEFS) $<
  .f.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $<
  .F90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(DEFS) $<
  .f90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $<
  #

  Thank you and best regards.

  Chun



[SIESTA-L] Test error?

2006-09-06 Thread Chun Li
Dear all,

I just compiled parallel siesta-2.0 on our 64bit 2G Opteron CPU cluster. But 
when I tried to make the examples in the Test folder, the following error 
appeared (e.g. Fe):

 Running fe test...
== Copying pseudopotential file for Fe...
== Running SIESTA as ../../../Src/siesta
MPICH-GM Error: Need to obtain the job magic number in GMPI_MAGIC !
make: *** [completed] Error 141

I tried other test examples; the same error occurred!
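For what it's worth, MPICH-GM's GMPI_MAGIC message usually appears when the binary is started directly (as the Tests makefile does with `../../../Src/siesta`) rather than through the GM-aware launcher, so the processes never receive their job magic number. A sketch of a launch line for a job script — the launcher path and the `-machinefile` option are assumptions based on the MPI_ROOT in the arch.make below, not taken from the thread:

```
# Hypothetical: start siesta through MPICH-GM's own mpirun so the
# GM job magic number is propagated to every process.
MPIRUN=/share/data/software/mpich-gm/pgi/bin/mpirun.ch_gm
$MPIRUN -np 4 -machinefile ./nodes ./siesta < test.fdf > test.out
```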

I also tried to submit my own calculation test to our cluster with 4 or 8 CPUs; 
it runs normally in parallel at the beginning, but then hangs 
without any error or output after about two minutes. The last lines in the 
output file are:

siesta:  Simulation parameters 
siesta:
siesta: The following are some of the parameters of the simulation.
siesta: A complete list of the parameters used, including default values,
siesta: can be found in file out.fdf
siesta:
coor:   Atomic-coordinates input format  = Cartesian coordinates
coor:  (in Angstroms)

Could anyone give me some ideas? My arch.make file is as follows:

# 
# This file is part of the SIESTA package.
#
# Copyright (c) Fundacion General Universidad Autonoma de Madrid:
# E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal
# and J.M.Soler, 1996-2006.
# 
# Use of this software constitutes agreement with the full conditions
# given in the SIESTA license, as signed by all legitimate users.
#
SIESTA_ARCH=pgf90-matterhorn-gm
#
FC=/share/data/software/mpich-gm/pgi/bin/mpif90
FC_ASIS=$(FC)
#
FFLAGS= -fastsse
FFLAGS_DEBUG= -g -O0
RANLIB=echo
COMP_LIBS=dc_lapack.a
#
NETCDF_LIBS= #  /usr/local/netcdf-3.5/lib/pgi/libnetcdf.a
NETCDF_INTERFACE=#  libnetcdf_f90.a
DEFS_CDF=#  -DCDF
#
MPI_ROOT_MYRINET=/share/data/software/mpich-gm/pgi
MYRINET_LIBS=-L/opt/gm/lib64 -lgm
MPI_LIBS_MYRINET= -L$(MPI_ROOT_MYRINET)/lib -lmpich $(MYRINET_LIBS)
#
MPI_ROOT_MPICH=//share/data/software/mpich-gm/pgi
MPI_LIBS_MPICH= -L$(MPI_ROOT_MPICH)/lib -lmpich 
#
MPI_ROOT=$(MPI_ROOT_MYRINET)
MPI_LIBS=$(MPI_LIBS_MYRINET)

#
MPI_INTERFACE=libmpi_f90.a
MPI_INCLUDE=$(MPI_ROOT)/include
DEFS_MPI=-DMPI
#

LIBS= -L/share/data/software/acml270/acml2.7.0/pgi64/lib  -lscalapack \
   -lblacs -lblacsF77init -lblacsCinit -lblacsF77init -lblacs \
   -llapack -lblas \
   $(MPI_LIBS)  $(NETCDF_LIBS)
SYS=cpu_time
DEFS= $(DEFS_CDF) $(DEFS_MPI)
#
#
# Important (at least for V5.0-1 of the pgf90 compiler...)
# Compile atom.f and electrostatic.f without optimization.
#
atom.o:
	$(FC) -c $(FFLAGS_DEBUG) atom.f
#
electrostatic.o:
	$(FC) -c $(FFLAGS_DEBUG) electrostatic.f
#
.F.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(DEFS) $<
.f.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $<
.F90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $(DEFS) $<
.f90.o:
	$(FC) -c $(FFLAGS) $(INCFLAGS) $<
#

Thank you and best regards.

Chun



Re: [SIESTA-L] A suggestion to all Chinese users

2006-09-03 Thread Chun Li
Dear SIESTA users,

I think the original idea of Dr. Wu is positive, but his expression was indeed 
inconsiderate. Dr. Wu is a warm-hearted person, who helped me with the code 
ABINIT when I was a beginner. But this time I think Dr. Wu should consider 
netiquette as such, and not involve other factors, especially nationality.

In addition, frankly, I am also a beginner with SIESTA. I imagine that no 
beginner posts a question without any thought, so I hope experienced users can 
show beginners more understanding. Of course, we beginners should also observe 
netiquette, to make the best use of the forum. Anyway, I hope all users will 
enjoy this forum and learn more. 

Thanks for your attention.

Chun
  - Original Message - 
  From: Mingsu Si 
  To: SIESTA-L@listserv.uam.es 
  Sent: Sunday, September 03, 2006 6:20 PM
  Subject: (Rising alert - this mail may be spam) Re: [SIESTA-L] A suggestion to all Chinese users


  I am sorry for my question, but there are some things to be mentioned. In the 
course of calculating the BN graphitic sheet, I designed some structures, but 
the result was very strange: the pure BN graphitic sheet shows a magnetic 
moment. So I think the structure may be wrong. I wanted to know the lattice 
parameters and test the results; however, I have now solved this problem by 
reading some papers. In the beginning I thought the forum was open to any 
questions. As beginners, we may only be able to ask some poor-quality 
questions. In fact, not all the difficult, high-quality questions can be 
solved by Dr. Wu either; I think so, don't you? I think a puzzling question 
that may seem simple to you is still worth raising in the forum. Finally, I 
feel woeful and puzzled by Dr. Wu's suggestions. 

- Original Message - 
From: Wu Rongqin 
To: SIESTA-L@listserv.uam.es 
Sent: Sunday, September 03, 2006 10:24 AM
Subject: [SIESTA-L] A suggestion to all Chinese users


Dear all,



I, a Chinese PhD doing first-principles calculations and a user of Siesta (I 
have a publication using Siesta), am very glad that in recent years so many 
Chinese students have devoted themselves to first-principles calculation. I am 
also glad that Siesta is favored by them (one can see this from the questions 
raised in the forum). Many of them feel quite free to discuss problems in this 
forum; this is good news.

HOWEVER, I WANT TO GIVE ALL CHINESE USERS OF SIESTA A SUGGESTION. WHEN YOU WANT 
TO POST YOUR PROBLEMS TO THE FORUM, PLEASE GIVE YOUR PROBLEMS A SECOND THOUGHT! 
YOU MIGHT ACTUALLY FIND THAT YOU CAN SOLVE YOUR PROBLEMS YOURSELF AND THUS 
THERE IS NO NEED TO PUT YOUR QUESTIONS TO THIS FORUM.

QUESTIONS ARE WELCOME, BUT WHAT EVALUATION WILL OTHERS GIVE YOU IF YOUR 
QUESTIONS ARE OF POOR QUALITY? HOW WILL WESTERN PEOPLE EVALUATE CHINESE 
STUDENTS? JUST THINK OF THIS.







-Original Message-
From: Siesta, Self-Consistent DFT LCAO program, http://www.uam.es/siesta 
[mailto:[EMAIL PROTECTED] On Behalf Of Mingsu Si
Sent: Sunday, September 03, 2006 9:51 AM
To: SIESTA-L@listserv.uam.es
Subject: [SIESTA-L] about graphitic BN nanotube



Dear all,



How to write the lattice parameters of graphitic BN nanotube?






[SIESTA-L] pseudopotential problem?

2006-09-01 Thread Chun Li
Dear all:
 I need help urgently. I encountered a problem after generating the 
pseudopotential of carbon, which prevents the calculation from running 
correctly. An error [forrtl: severe (41): insufficient virtual memory] always 
arises. My C.tm2.inp is:

   pg   C Pseudopotencial
        tm2    2.00
   C    ca
   0
   1    2
   2    0    2.00
   2    1    2.00
   1.24  1.24   0.0   0.0   0.0
#
#2345678901234567890123456789012345678901234567890  Ruler
Please help me!
Regards
Zhuhua Zhang



Re: [SIESTA-L] alloc_err: allocate status error 4205

2006-08-17 Thread Chun Li
Hi, Joaquin Ortega,

Have you solved this problem? I met a similar problem when performing 
polarization calculations (108 atoms), but the calculations complete without 
problems when smaller systems are studied. Could you give me some ideas? 
Thanks a lot.

Best regards.

Chun

- Original Message - 
From: Joaquin Ortega 
To: SIESTA-L@LISTSERV.UAM.ES
Sent: Monday, December 19, 2005 6:51 PM
Subject: [SIESTA-L] alloc_err: allocate status error 4205


 Hi all.
 
 I am trying to run ferripyrophyllite (160 atoms, 300 cutoff, etc.) on a
 parallel SGI cluster, but I receive the following error message for some
 calculations:
 
 alloc_err: allocate status error 4205
 alloc_err: array unknown requested by initatomlists
 alloc_err: dim, lbound, ubound:  1   1   0
 alloc_err: dim, lbound, ubound:  2  48
 alloc_err: allocate status error 4205
 alloc_err: array unknown requested by initatomlists
 alloc_err: dim, lbound, ubound:  1   1   0
 alloc_err: dim, lbound, ubound:  2  96
 alloc_err: allocate status error 4205
 
 Does anyone have a tip, or know the solution to this problem?
 
 Thanks in advance,
 
 Joaquin.
 
 PhD. Joaquin Ortega Castro.
 Estación Experimental del Zaidin (CSIC).
 C/ Profesor Albareda n 1 - CP 18008.
 Granada, Spain
 Telephone: 958 181600 (Ext. 245).