I have been trying to run a hybrid PBE0 calculation in parallel and keep hitting 
the same problem: the run stops right after SCF convergence is first achieved 
and never goes on to refine the calculation with EXX. I have tried running on 
between 24 and 192 processors, because we initially suspected memory 
requirements were to blame, but that did not help.
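For context, PBE0 is requested in the usual way through the &system namelist; 
the snippet below is only a sketch of the relevant lines (the cutoff and 
q-grid values here are placeholders, not my actual settings - the full input 
is attached):

    &system
       ...
       input_dft = 'PBE0'              ! hybrid functional: EXX refinement should begin after the first SCF
       nqx1 = 1, nqx2 = 1, nqx3 = 1    ! q-point grid for the exact-exchange operator (placeholder values)
       ecutwfc = 40.0                  ! placeholder cutoff
    /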

'TACC: Setting up parallel environment for MVAPICH2+mpispawn.
TACC: Starting parallel tasks...

     Program PWSCF v.5.2.0 '


'Parallel version (MPI), running on   192 processors
 R & G space division:  proc/nbgrp/npool/nimage =     192'


'highest occupied, lowest unoccupied level (ev):     6.3409    6.0501

convergence has been achieved in  16 iterations

TACC: MPI job exited with code: 1

TACC: Shutdown complete. Exiting.'


This failure produces a CRASH file that reports an error reading an XML data 
file. On inspection, the .save folder contains the charge-density.dat file but 
is missing the .UPF files, the data-file.xml file, and the data-file.xml.eig 
file. Furthermore, while the .igk files have content, all the .wfc files 
produced by the job are empty (0 bytes). My impression is that this missing 
content is what keeps the calculation from proceeding, but I don't know how it 
might be addressed.

Attached are the input and output files from one such run. I also noticed an 
additional message in the output:

'     Message from routine sym_rho_init:
     likely internal error: no G-vectors found'

but the calculation seems to proceed despite it.

I would appreciate any insight you might have into these problems.

Thank you for your help,
Daniel Dumett Torres
Graduate Student at the University of Illinois at Urbana-Champaign

Attachment: Cu4Cd2Se4_PBE0.in

Attachment: Cu4Cd2Se4_PBE0.out
