Sure, I attached it to this email.

Best wishes,

Lars


On 30/05/18 15:12, Azadi, Sam wrote:
Lars—

Would you share your make.inc file for other users of QE on ARCHER?
Best, Sam
——
Sam Azadi
Imperial College London

On 30 May 2018, at 14:49, Lars Blumenthal <[email protected]> wrote:

For future reference: with Paolo's help, I found out that I had to recompile QE. At first I was running the pwscf v.6.1 binary that comes precompiled on ARCHER, and with that build the parallelisation over k-points did not work when using hybrid functionals. It does, however, work with the pwscf version (also v.6.1) that I compiled myself.
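
A rough outline of the build, in case it is useful to others (module
names, paths, and configure flags below are from memory and
illustrative; adjust them for your own ARCHER environment):

    module swap PrgEnv-cray PrgEnv-intel
    cd qe-6.1
    ./configure MPIF90=ftn CC=cc F77=ftn --with-scalapack=intel
    # edit make.inc if needed (mine is attached), then:
    make pw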

Many thanks to Paolo and best wishes,

Lars
PhD Student
EPSRC Centre for Doctoral Training on Theory and Simulation of Materials
Imperial College London


On 30/05/18 12:36, Paolo Giannozzi wrote:
I made a quick test on a reduced version of your job and found no problems, but the original job requires a larger machine and I have no time to work on it now.

Paolo

On Wed, May 30, 2018 at 11:58 AM, Lars Blumenthal <[email protected]> wrote:

    Does anyone have any advice/feedback?

    Many thanks,

    Lars
    PhD Student
    EPSRC Centre for Doctoral Training on Theory and Simulation of
    Materials
    Imperial College London


    On 25/05/18 17:03, Lars Blumenthal wrote:
    Hi everyone,

    I am trying to do scf calculations using the HSE functional
    with PWSCF v.6.1 (svn rev. 13369).
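
    For context, the relevant part of my input file looks roughly
    like this (the excerpt is illustrative rather than copied from
    the failing run):

        &SYSTEM
          ...
          input_dft = 'HSE'
          nqx1 = 2, nqx2 = 2, nqx3 = 2  ! q-point grid for exact exchange
        /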

    When I don't use parallelisation over k-points, i.e. when I
    don't specify npools, the calculation runs successfully.
    However, as soon as I try to make use of npools, the
    calculation crashes with:

    DPOTRF exited with INFO= 7
    Error in routine DPOTRF (1):
    Cholesky failed in aceupdate.

    I have attached the corresponding output file. I previously had
    the same issue with another compound, but in that case npools = 2
    did work and the calculation only crashed with the above error
    for npools > 2. So it is not that the parallelisation over
    npools never works at all.
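
    For reference, I select the number of pools on the pw.x command
    line, along these lines (the task and pool counts here are
    illustrative, not the exact values from the failing run):

        aprun -n 48 pw.x -npools 4 -input scf.in > scf.out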

    Not using the ACE algorithm makes the calculation painfully
    slow, so I'd like to avoid that. Do you have any advice on how
    to optimise the parallelisation of hybrid DFT calculations in
    general?

    Many thanks and best wishes,

    Lars Blumenthal
    PhD Student
    EPSRC Centre for Doctoral Training on Theory and Simulation of
    Materials
    Imperial College London




--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222




# make.inc.  Generated from make.inc.in by configure.

# compilation rules

.SUFFIXES :
.SUFFIXES : .o .c .f .f90

# most fortran compilers can directly preprocess c-like directives: use
#       $(MPIF90) $(F90FLAGS) -c $<
# if explicit preprocessing by the C preprocessor is needed, use:
#       $(CPP) $(CPPFLAGS) $< -o $*.F90
#       $(MPIF90) $(F90FLAGS) -c $*.F90 -o $*.o
# remember the tabulator in the first column !!!

.f90.o:
        $(MPIF90) $(F90FLAGS) -c $<

# .f.o and .c.o: do not modify

.f.o:
        $(F77) $(FFLAGS) -c $<

.c.o:
        $(CC) $(CFLAGS)  -c $<



# Top QE directory, useful for locating libraries and linking QE with plugins
# The following syntax should always point to TOPDIR:
TOPDIR = $(dir $(abspath $(filter %make.inc,$(MAKEFILE_LIST))))
# if it doesn't work, uncomment the following line (edit if needed):

# TOPDIR = /work/e547/e547/lars2/Quantum_Espresso/qe_6.1.0

# DFLAGS  = precompilation options (possible arguments to -D and -U)
#           used by the C compiler and preprocessor
# FDFLAGS = as DFLAGS, for the f90 compiler
# See include/defs.h.README for a list of options and their meaning
# With the exception of IBM xlf, FDFLAGS = $(DFLAGS)
# For IBM xlf, FDFLAGS is the same as DFLAGS with separating commas

# MANUAL_DFLAGS  = additional precompilation option(s), if desired
#                  BEWARE: it does not work for IBM xlf! Manually edit FDFLAGS
MANUAL_DFLAGS  =
DFLAGS         =  -D__DFTI -D__MPI -D__SCALAPACK
FDFLAGS        = $(DFLAGS) $(MANUAL_DFLAGS)

# IFLAGS = how to locate directories with *.h or *.f90 files to be included
#          typically -I../include -I/some/other/directory/
#          the latter contains e.g. files needed by FFT libraries

IFLAGS         = -I$(TOPDIR)/include -I../include/ -I/opt/intel/mkl/include

# MOD_FLAG = flag used by f90 compiler to locate modules
# Each Makefile defines the list of needed modules in MODFLAGS

MOD_FLAG      = -I

# Compilers: fortran-90, fortran-77, C
# If a parallel compilation is desired, MPIF90 should be a fortran-90
# compiler that produces executables for parallel execution using MPI
# (such as for instance mpif90, mpf90, mpxlf90,...);
# otherwise, an ordinary fortran-90 compiler (f90, g95, xlf90, ifort,...)
# If you have a parallel machine but no suitable candidate for MPIF90,
# try to specify the directory containing "mpif.h" in IFLAGS
# and to specify the location of MPI libraries in MPI_LIBS

MPIF90         = ftn
#F90           = ifort
CC             = cc
F77            = ftn

# C preprocessor and preprocessing flags - for explicit preprocessing,
# if needed (see the compilation rules above)
# preprocessing flags must include DFLAGS and IFLAGS

CPP            = cpp
CPPFLAGS       = -P -traditional $(DFLAGS) $(IFLAGS)

# compiler flags: C, F90, F77
# C flags must include DFLAGS and IFLAGS
# F90 flags must include MODFLAGS, IFLAGS, and FDFLAGS with appropriate syntax

CFLAGS         = -O3 $(DFLAGS) $(IFLAGS)
F90FLAGS       = $(FFLAGS) -nomodule -fpp $(FDFLAGS) $(IFLAGS) $(MODFLAGS)
FFLAGS         = -O2 -assume byterecl -g -traceback

# compiler flags without optimization for fortran-77
# the latter is NEEDED to properly compile dlamch.f, used by lapack

FFLAGS_NOOPT   = -O0 -assume byterecl -g -traceback

# compiler flag needed by some compilers when the main program is not fortran
# Currently used for Yambo

FFLAGS_NOMAIN   = -nofor_main

# Linker, linker-specific flags (if any)
# Typically LD coincides with F90 or MPIF90, LD_LIBS is empty

LD             = ftn
LDFLAGS        = -static \
                 -I/opt/intel/composer_xe_2013_sp1.4.211/mkl/include/ \
                 -I/opt/intel/composer_xe_2013_sp1.4.211/mkl/include/intel64/lp64/
LD_LIBS        = 

# External Libraries (if any) : blas, lapack, fft, MPI

# If you have nothing better, use the local copy via "--with-netlib" :
# BLAS_LIBS = /your/path/to/espresso/LAPACK/blas.a
# BLAS_LIBS_SWITCH = internal

BLAS_LIBS      = /opt/intel/composer_xe_2013_sp1.4.211/mkl/lib/intel64/libmkl_sequential.a \
                 /opt/intel/composer_xe_2013_sp1.4.211/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a \
                 -Wl,--end-group
BLAS_LIBS_SWITCH = external

# If you have nothing better, use the local copy via "--with-netlib" :
# LAPACK_LIBS = /your/path/to/espresso/LAPACK/lapack.a
# LAPACK_LIBS_SWITCH = internal
# For IBM machines with essl (-D__ESSL): load essl BEFORE lapack !
# remember that LAPACK_LIBS precedes BLAS_LIBS in loading order

LAPACK_LIBS    = /opt/intel/composer_xe_2013_sp1.4.211/mkl/lib/intel64/libmkl_intel_lp64.a \
                 /opt/intel/composer_xe_2013_sp1.4.211/mkl/lib/intel64/libmkl_core.a
LAPACK_LIBS_SWITCH = external

# NB: the -Wl,--start-group opened here is closed by the -Wl,--end-group at
# the end of BLAS_LIBS above; LIBS (see below) links SCALAPACK, LAPACK and
# BLAS in that order, so the MKL archives are searched as a single group.
SCALAPACK_LIBS = /opt/intel/composer_xe_2013_sp1.4.211/mkl/lib/intel64/libmkl_scalapack_lp64.a \
                 -Wl,--start-group

# nothing needed here if the internal copy of FFTW is compiled
# (needs -D__FFTW in DFLAGS)

FFT_LIBS       = 

# HDF5
HDF5_LIB = 

# For parallel execution, the correct path to MPI libraries must
# be specified in MPI_LIBS (except for IBM if you use mpxlf)

MPI_LIBS       = 

# IBM-specific: MASS libraries, if available and if -D__MASS is defined
# in FDFLAGS

MASS_LIBS      = 

# ar command and flags - for most architectures: AR = ar, ARFLAGS = ruv

AR             = ar
ARFLAGS        = ruv

# ranlib command. If ranlib is not needed (it isn't in most cases) use
# RANLIB = echo

RANLIB         = ranlib

# all internal and external libraries - do not modify

FLIB_TARGETS   = all

LIBOBJS        = $(TOPDIR)/clib/clib.a $(TOPDIR)/iotk/src/libiotk.a
LIBS           = $(SCALAPACK_LIBS) $(LAPACK_LIBS) $(FFT_LIBS) $(BLAS_LIBS) \
                 $(MPI_LIBS) $(MASS_LIBS) $(HDF5_LIB) $(LD_LIBS)

# wget or curl - useful to download from network
WGET = wget -O

# Install directory - not currently used
PREFIX = /work/e547/e547/lars2/Quantum_Espresso/qe_6.1.0
_______________________________________________
users mailing list
[email protected]
https://lists.quantum-espresso.org/mailman/listinfo/users
