You are mixing two different ways of specifying the cell parameters:
///---
EITHER:
+--------------------------------------------------------------------
Variable: celldm(i), i=1,6
Type: REAL
See: ibrav
Description: Crystallographic constants - see the "ibrav" variable.
Specify either these OR
"A","B","C","cosAB","cosBC","cosAC" NOT both.
Only needed values (depending on "ibrav") must
be specified
alat = "celldm"(1) is the lattice parameter "a"
(in BOHR)
If "ibrav"==0, only "celldm"(1) is used if present;
cell vectors are read from card "CELL_PARAMETERS"
+--------------------------------------------------------------------
OR:
+--------------------------------------------------------------------
Variables: A, B, C, cosAB, cosAC, cosBC
Type: REAL
See: ibrav
Description: Traditional crystallographic constants:
a,b,c in ANGSTROM
cosAB = cosine of the angle between axis a
and b (gamma)
cosAC = cosine of the angle between axis a
and c (beta)
cosBC = cosine of the angle between axis b
and c (alpha)
The axes are chosen according to the value of
@ref ibrav.
Specify either these OR @ref celldm but NOT both.
Only needed values (depending on @ref ibrav)
must be specified.
The lattice parameter alat = A (in ANGSTROM ).
If @ref ibrav == 0, only A is used if present, and
cell vectors are read from card @ref CELL_PARAMETERS.
+--------------------------------------------------------------------
\\\---
This might be the cause of the strange behavior, assuming that your
machine has the ~4 GB of free RAM needed to perform the calculation,
as indicated in the output. In any case, for a regular hcp supercell
you should not need to specify the cosAB and cosAC values at all.
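For a regular hcp cell with ibrav = 4, a consistent &system block relies on celldm alone (a sketch using the values from your input below; the cell angles are implied by the lattice type):

```
&system
   ibrav     = 4       ! hexagonal lattice: gamma = 120 deg is implied
   celldm(1) = 9.84    ! a, in Bohr (alat)
   celldm(3) = 10      ! c/a ratio
   ! no cosAB / cosAC here: with ibrav = 4 they are fixed by the
   ! lattice type, and mixing them with celldm is the error above
   nat       = 32
   ntyp      = 1
   ecutwfc   = 80
/
```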
HTH
Giuseppe
Quoting "H1@GMAIL" <[email protected]>:
Hi Giuseppe
apologies. My input file:
&control
calculation = 'scf'
prefix = 'graphene'
pseudo_dir = '/gscratch/hwahab/DFT-code/psp/'
outdir = './'
restart_mode = 'from_scratch'
etot_conv_thr = 1.d-6
forc_conv_thr = 1.d-5
/
&system
ibrav = 4
celldm(1) = 9.84
celldm(3) = 10
cosAB = -0.5
cosAC = 1
nat = 32
ntyp = 1
ecutwfc = 80
occupations = 'smearing'
smearing = 'gaussian'
degauss = 0.1
vdw_corr='grimme-d2'
/
&electrons
diagonalization = 'david'
diago_thr_init = 1.d-4
mixing_mode = 'local-TF'
mixing_beta = 0.7
conv_thr = 1.d-10
/
&ions
/
ATOMIC_SPECIES
C 12.0107 C.pbe-mt_gipaw.UPF
ATOMIC_POSITIONS crystal
C 0.16667 0.08333 0.00000
C 0.41667 0.08333 0.00000
C 0.66667 0.08333 0.00000
C 0.91667 0.08333 0.00000
C 0.08333 0.16667 0.00000
C 0.33333 0.16667 0.00000
C 0.58333 0.16667 0.00000
C 0.83333 0.16667 0.00000
C 0.16667 0.33333 0.00000
C 0.41667 0.33333 0.00000
C 0.66667 0.33333 0.00000
C 0.91667 0.33333 0.00000
C 0.08333 0.41667 0.00000
C 0.33333 0.41667 0.00000
C 0.58333 0.41667 0.00000
C 0.83333 0.41667 0.00000
C 0.16667 0.58333 0.00000
C 0.41667 0.58333 0.00000
C 0.66667 0.58333 0.00000
C 0.91667 0.58333 0.00000
C 0.08333 0.66667 0.00000
C 0.33333 0.66667 0.00000
C 0.58333 0.66667 0.00000
C 0.83333 0.66667 0.00000
C 0.16667 0.83333 0.00000
C 0.41667 0.83333 0.00000
C 0.66667 0.83333 0.00000
C 0.91667 0.83333 0.00000
C 0.08333 0.91667 0.00000
C 0.33333 0.91667 0.00000
C 0.58333 0.91667 0.00000
C 0.83333 0.91667 0.00000
K_POINTS automatic
8 8 1 0 0 0
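As an aside, the geometry that ibrav = 4 already encodes can be checked directly; the short script below (a sketch, using the standard QE hexagonal lattice vectors) shows that cosAB = -0.5 is implied by the lattice type, so setting it by hand is redundant:

```python
import math

# Standard hexagonal (ibrav=4) lattice vectors as pw.x builds them
# from celldm(1) and celldm(3); values below are from the input above.
alat = 9.84          # celldm(1), Bohr
c_over_a = 10.0      # celldm(3)
a1 = (alat, 0.0, 0.0)
a2 = (-alat / 2, alat * math.sqrt(3) / 2, 0.0)
a3 = (0.0, 0.0, alat * c_over_a)

def cos_angle(u, v):
    """Cosine of the angle between two lattice vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

print(cos_angle(a1, a2))  # cosAB: -0.5 up to rounding (gamma = 120 deg)
print(cos_angle(a1, a3))  # cosAC: 0 (c axis orthogonal to a and b)
```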
And the end snippet of the output:
Estimated max dynamical RAM per process > 3558.76MB
Initial potential from superposition of free atoms
starting charge 111.99996, renormalised to 128.00000
negative rho (up, down): 5.479E-05 0.000E+00
Starting wfc are 256 randomized atomic wfcs
There are no error outputs; it just gets stuck there.
Hope this makes sense.
Hud Wahab
University of Wyoming
1000 E University Ave
Laramie WY, 82072
Email: [email protected]
On 5/9/2019 12:50:50 PM, Giuseppe Mattioli
<[email protected]> wrote:
Dear Hud (please always sign your posts to this forum with your full
name and scientific affiliation; we appreciate it),
It is impossible to help you if you don't post the input of your
calculation and the relevant part of your output (where does the code
stop?). Is there any system error, such as a segfault, printed e.g. in
a nohup.out file? It is important to look at this kind of information
first, in order to see whether the error is reproducible on different
machines/architectures or with different compilers/libraries.
HTH
Giuseppe
Quoting "H1@GMAIL" :
Hello
I run on 6.1-serial. My pw.x scf runs OK with small systems (2
atoms), but nothing happens when scaled up to a 4x4 supercell (32
atoms). I let it run for more than an hour and don't expect the
calculation to take that long.
From the troubleshooting section in the User Guide I see it might be a
floating-point error causing endless NaNs - how do I handle such
exceptions?
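The floating-point exceptions mentioned above can be trapped at build time with gfortran (a sketch; the exact configure invocation depends on your installation):

```shell
# Rebuild pw.x so that invalid operations, divide-by-zero and overflow
# abort with a backtrace instead of silently propagating NaNs.
./configure FFLAGS="-O2 -g -fbacktrace -ffpe-trap=invalid,zero,overflow"
make pw
```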
As I can't provide the error output, I am not sure what details you
need to troubleshoot, but let me know if something is missing
Cheers, Hud
Dept. Chemical Engineering
University of Wyoming
GIUSEPPE MATTIOLI
CNR - ISTITUTO DI STRUTTURA DELLA MATERIA
Via Salaria Km 29,300 - C.P. 10
I-00015 - Monterotondo Scalo (RM)
Mob (*preferred*) +39 373 7305625
Tel + 39 06 90672342 - Fax +39 06 90672316
E-mail: <[email protected]>
_______________________________________________
Quantum Espresso is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list [email protected]
https://lists.quantum-espresso.org/mailman/listinfo/users