Hi, from the backtrace below I see that you are using libmkl_avx in your
linking.
If you use gfortran + MKL, can you check that your make.sys contains
the following:

BLAS_LIB = -Wl,--no-as-needed -L${MKLROOT}/lib/intel64 -lmkl_gf_lp64 \
           -lmkl_core -lmkl_sequential -lpthread -lm
If this still doesn't work, you might try using the internal BLAS and
LAPACK, as Paolo suggested.
HTH.
Nicola
On 28/10/15 10:36, Paolo Giannozzi wrote:
Please see here:
http://www.quantum-espresso.org/faq/frequent-errors-during-execution/#5.2
Since zdotc is the function that segfaults, this (from the user guide)
could be relevant:
2.7.6.3 Linux PCs with gfortran
"There is a known incompatibility problem between the calling
convention for Fortran functions that return complex values: there is
the convention used by g77/f2c, where in practice the compiler
converts such functions to subroutines with a further parameter for
the return value; gfortran instead produces a normal function
returning a complex value. If your system libraries were compiled
using g77 (which may happen for system-provided libraries in
not-too-recent Linux distributions), and you instead use gfortran to
compile QUANTUM ESPRESSO, your code may crash or produce random
results. This typically happens during calls to zdotc, which is one of
the most commonly used complex-returning functions of BLAS+LAPACK.
For further details see for instance this link:
http://www.macresearch.org/lapackblas-fortran-106#comment-17071
or read the man page of gfortran under the flag -ff2c.
If your code crashes during a call to zdotc, try to recompile QUANTUM
ESPRESSO using the internal BLAS and LAPACK routines (using the
-with-internal-blas and -with-internal-lapack parameters of the
configure script) to see if the problem disappears; or, add the -ff2c
flag" (info by Giovanni Pizzi, Jan. 2013).
Note that a similar problem with complex functions exists with MKL
libraries as well: if you compile with gfortran, link -lmkl_gf_lp64,
not -lmkl_intel_lp64, and the like for other architectures. Since
v.5.1, you may use the following workaround: add preprocessing option
-Dzdotc=zdotc_wrapper to DFLAGS.
On Tue, Oct 27, 2015 at 10:27 PM, Pulkit Garg <[email protected]> wrote:
[sparky-32:92490] *** Process received signal ***
[sparky-32:92490] Signal: Segmentation fault (11)
[sparky-32:92490] Signal code: (128)
[sparky-32:92490] Failing at address: (nil)
[sparky-32:92490] [ 0]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x2ac8b2aa7cb0]
[sparky-32:92490] [ 1]
/opt/intel/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_avx.so(mkl_blas_avx_zdotc+0xe0)
[0x2ac8c4de7da0]
[sparky-32:92490] [ 2]
/opt/intel/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_gf_lp64.so(zdotc_gf+0x2e)
[0x2ac8b007a56e]
[sparky-32:92490] [ 3]
/opt/intel/composer_xe_2013.3.163/mkl/lib/intel64/libmkl_gf_lp64.so(zdotc+0x26)
[0x2ac8b007a87e]
[sparky-32:92490] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 12 with PID 92490 on node
sparky-32 exited on signal 11 (Segmentation fault).
I am able to run QE for my structure with 4 atoms and also when my
structure has 50 atoms. But when I run QE for a bigger structure
(108 atoms), I get the above error. People have posted similar
errors, but I am not sure how to fix this.
Pulkit Garg
_______________________________________________
Pw_forum mailing list
[email protected]
http://pwscf.org/mailman/listinfo/pw_forum
--
Paolo Giannozzi, Dept. Chemistry&Physics&Environment,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
--
Nicola Varini, PhD
Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
CE 0 813 (Bâtiment CE)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch