Dear all,

Since I noticed that the attachments may have been removed, I am resending the message with the text files reported inline at the end instead of attached. Sorry for the duplicate.

I am experiencing crashes of fs.x with all of versions 6.3, 6.4, 6.5 and 6.6. I'm running Quantum ESPRESSO on the supercomputing cluster "Galileo" at CINECA, Italy, which has 1022 36-core compute nodes, each with two 18-core Intel Xeon E5-2697 v4 (Broadwell) processors at 2.30 GHz.


I proceed as follows; all the input files and job submission scripts are reported at the end of the message, and a sketch of how the three jobs are chained follows the list:


 1) run a scf calculation

 2) run a nscf calculation

 3) run fs.x
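
For reference, the three jobs are submitted one after the other. A minimal sketch of the chaining with Slurm job dependencies (the script names job_scf.sh, job_nscf.sh and job_fs.sh are hypothetical; the actual scripts are reported at the end):

 # hypothetical script names; each job starts only if the previous one completed OK
 JID_SCF=$(sbatch --parsable job_scf.sh)
 JID_NSCF=$(sbatch --parsable --dependency=afterok:$JID_SCF job_nscf.sh)
 sbatch --dependency=afterok:$JID_NSCF job_fs.sh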


 At step 3), QE crashes and I get in the .out file:

     Error in routine fill_fs_grid (1):
     cannot locate  k point


 and in the CRASH file:

     task #        19
     from fill_fs_grid : error #         1
     cannot locate  k point
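
(If it helps: from a quick look at PP/src/fermisurface.f90, fill_fs_grid seems to match each k point of the nscf calculation to a node of a regular grid, and the error is raised when a point cannot be matched; but I may well be misreading the code.)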



Please note that I asked colleagues of mine at the University of Warwick, who use QE on an HPC infrastructure (Tinis) with Lenovo NeXtScale nx360 M5 servers, each with 2 x Intel Xeon E5-2630 v3 2.4 GHz (Haswell) 8-core processors, to run QE with the very same input files, and they told me that everything ran smoothly with NO crash.


So, I suppose I'm making a silly mistake in the submission script, or, as suggested by my colleagues at Warwick, there is a compilation issue on the CINECA machine I'm using.


Could you please help me understand where to look, or what to check, in order to deal with this issue?



By the way, in step 2) I also tried occupations = 'tetrahedra' and occupations = 'fixed', and fs.x crashes in the same way.
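
Concretely, I mean adding to the &system namelist of the nscf input below, e.g.:

 &system
    ...
    occupations = 'tetrahedra',
 /

(occupations is a &system variable; everything else was left unchanged).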


 Thank you in advance for your help,


 Patrizio



1) scf calculation

 input file

 &control
    prefix='Mg3Sb2',
    pseudo_dir = './',
    outdir='./'
    wf_collect=.true.
    etot_conv_thr = 1.0d-8,
/
 &system
    ibrav=  4,
    celldm(1) = 8.6767152,
    celldm(3) = 1.5818114,
    nat=  5,
    ntyp= 2,
    ecutwfc = 100.0,
 /
 &electrons
    conv_thr = 1.0d-8,
    mixing_beta = 0.7,
 /
ATOMIC_SPECIES
 Mg  24.305  Mg.upf
 Sb  121.76  Sb.upf
ATOMIC_POSITIONS crystal
 Mg 0      0      0
 Mg 1/3 2/3 0.3679670359
 Mg 2/3 1/3 0.6320329641
 Sb 1/3 2/3 0.7745210172
 Sb 2/3 1/3 0.2254789828
K_POINTS automatic
   13 13 9 0 0 0



 submission script

#!/bin/bash
#SBATCH --time=24:00:00        # Walltime in hh:mm:ss
#SBATCH --nodes=4              # Number of nodes
#SBATCH --ntasks-per-node=6   # Number of MPI ranks per node
#SBATCH --cpus-per-task=6 # Number of OpenMP threads for each MPI process/rank
#SBATCH --mem=118000           # Per-node memory request (MB)
#SBATCH --account=IscrC_MEETH
#SBATCH --job-name=mg3sb2
#SBATCH --partition=gll_usr_prod

module purge
module load profile/phys
module load autoload qe/6.6

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export MKL_NUM_THREADS=${OMP_NUM_THREADS}

mpirun pw.x -npool 24 -input scf.in > Mg3Sb2.out
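
(For the record: 4 nodes x 6 MPI ranks per node = 24 ranks in total, so -npool 24 puts exactly one MPI rank in each k-point pool, and 6 ranks x 6 OpenMP threads per rank fill the 36 cores of each node.)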



2) nscf calculation

 input file

 &control
    prefix='Mg3Sb2',
    pseudo_dir = './',
    outdir='./'
    wf_collect=.true.
    etot_conv_thr = 1.0d-8,
    calculation = 'nscf',
/
 &system
    ibrav=  4,
    celldm(1) = 8.6767152,
    celldm(3) = 1.5818114,
    nat=  5,
    ntyp= 2,
    ecutwfc = 100.0,
 /
 &electrons
    conv_thr = 1.0d-8,
    mixing_beta = 0.7,
 /
ATOMIC_SPECIES
 Mg  24.305  Mg.upf
 Sb  121.76  Sb.upf
ATOMIC_POSITIONS crystal
 Mg 0      0      0
 Mg 1/3 2/3 0.3679670359
 Mg 2/3 1/3 0.6320329641
 Sb 1/3 2/3 0.7745210172
 Sb 2/3 1/3 0.2254789828
K_POINTS automatic
   61 61 41 0 0 0


 submission script: same as above, except the last line:

mpirun pw.x -npool 24 -input nscf.in > Mg3Sb2_nscf.out


3) FS calculation

 input file

&fermi
        outdir = './',
        prefix = 'Mg3Sb2',
        DeltaE = 3
/


 submission script: same as above, except the last line:

mpirun fs.x -npool 24 -input fs.in > Mg3Sb2_fermi.out
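
(For context: as far as I understand, a successful fs.x run should write the Fermi surface, built on the uniform nscf k-point grid, in XCrySDen .bxsf format; on Galileo it stops with the fill_fs_grid error before producing any output.)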
