Ok... then the problem might be with the environment on my side. I will contact my HPC admin.
Thank you so much, Dr. Andrea.

On Mon, Nov 13, 2017 at 5:06 PM, Andrea Ferretti <[email protected]> wrote:
>
> Dear Bhushan,
>
> I've re-run your scf+nscf using both qe-6.0 and qe-6.2, and both result in
> the same amount of disk usage (about 1 GB for the scf and 23 GB for the
> nscf), which sounds reasonable given the size of the problem you are
> running (you have relatively large cell*nkpt parameters).
>
> take care
> Andrea
>
>> The input scripts in support of the query raised in my earlier reply are
>> attached here.
>>
>> On Thu, Nov 9, 2017 at 10:37 PM, B S Bhushan <[email protected]> wrote:
>>
>> Dear Dr. Andrea, thank you so much.
>>
>> Dear Dr. Lorenzo, yes, I read your mail, and I replied with thanks as well.
>>
>> I am facing a new problem with QE-6.2 on my supercomputer account.
>>
>> I had run an SCF calculation for a doped graphene structure with 48 atoms,
>> and then tried to run an NSCF calculation (I ultimately wanted to
>> calculate the DOS).
>> The NSCF calculation stopped partway through because it had already
>> consumed 80 GB and the disk ran out of space.
>> I am a bit surprised, because I have never seen an NSCF calculation take
>> this much disk space before.
>> I do not understand why this has happened.
>>
>> Can any of you please suggest something?
>>
>> I am very thankful for your precious time and knowledge.
>>
>> Sincerely,
>> B S Bhushan
>> ABV-IIITM Gwalior, India.
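The disk usage above is dominated by the wavefunction files written during the nscf run; their total size scales roughly as nbnd x npw x nkpt, so a 48-atom cell with a dense k-grid easily reaches tens of GB. pw.x controls this through the disk_io keyword in &CONTROL. A minimal sketch, assuming the 'graphene' prefix used later in this thread (disk_io is a documented pw.x keyword, but check the INPUT_PW documentation of your release before relying on it):

  &CONTROL
    calculation = 'nscf'
    prefix      = 'graphene'   ! assumed prefix; must match the scf run
    outdir      = './'
    disk_io     = 'none'       ! do not write wavefunction files to disk
  /

Since dos.x only needs the eigenvalues stored in the XML data file (note the read_xml_file call in the traceback further down), skipping the wavefunction files should not prevent a subsequent DOS calculation, though any postprocessing that does need the wavefunctions will no longer work, so verify on a small test case first.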
>> On Tue, Nov 7, 2017 at 4:10 PM, Lorenzo Paulatto <[email protected]> wrote:
>>
>> Dear BS,
>> Did you read my email? Was it not clear at some point?
>>
>> Kind regards,
>>
>> --
>> Lorenzo Paulatto
>> Written on a virtual keyboard with real fingers
>>
>> On 7 Nov 2017 10:37 a.m., "B S Bhushan" <[email protected]> wrote:
>>
>> It seems that dos.x from QE-6.2 cannot find the XML data from an NSCF run
>> produced using QE-6.1.
>> The following error was shown:
>>
>> Program DOS v.6.2 (svn rev. 13949:13950) starts on 7Nov2017 at 13:22:50
>>
>> This program is part of the open-source Quantum ESPRESSO suite
>> for quantum simulation of materials; please cite
>> "P. Giannozzi et al., J. Phys.: Condens. Matter 21 395502 (2009);
>> "P. Giannozzi et al., J. Phys.: Condens. Matter 29 465901 (2017);
>> URL http://www.quantum-espresso.org",
>> in publications or presentations arising from this work. More details at
>> http://www.quantum-espresso.org/quote
>>
>> Parallel version (MPI), running on 1 processors
>> MPI processes distributed on 1 nodes
>>
>> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> Error in routine pw_readschemafile (1):
>> xml data file not found
>> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>>
>> stopping ...
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>>
>> Should I do the vc-relax and NSCF again using QE-6.2?
>>
>> Please suggest...
>>
>> Thank you very much for your kind support.
>>
>> Sincerely,
>> B S Bhushan
>> Ph.D. Scholar,
>> ABV-IIITM Gwalior, India.
>>
>> On Tue, Nov 7, 2017 at 11:41 AM, B S Bhushan <[email protected]> wrote:
>>
>> Thank you very much, Dr. Andrea.
>> I have a question, sir:
>> if I install QE-6.2 on my supercomputer and run dos.x directly on the
>> NSCF outputs produced using QE-6.1, will it work properly?
>> Or do I have to do the vc-relax and NSCF again using QE-6.2?
>>
>> Please suggest... Your answer will save me a lot of time.
>>
>> Thank you so much for your precious time and knowledge.
>>
>> Sincerely,
>> B S Bhushan
>>
>> On Tue, Nov 7, 2017 at 3:45 AM, Vahid Askarpour <[email protected]> wrote:
>>
>> I think the input to dos.x (I call it dos.in) looks like this:
>>
>> &DOS
>>   outdir='./',
>>   prefix='graphene',
>>   fildos='graphene.dos',
>>   Emin=-10.0, Emax=16, DeltaE=0.002
>> /
>>
>> You run dos.x after the nscf run. I think the nscf.in should contain the
>> relaxed structure.
>>
>> For the definitions of Emin, Emax and DeltaE, see the online dos.x manual.
>>
>> Cheers,
>>
>> Vahid
>>
>> Vahid Askarpour
>> Department of Physics and Atmospheric Science
>> Dalhousie University,
>> Halifax, NS, Canada
>>
>> On Nov 6, 2017, at 4:19 PM, B S Bhushan <[email protected]> wrote:
>>
>> Dear Experts,
>>
>> I am trying to extract the DOS profiles for some graphene systems using a
>> supercomputing facility.
>>
>> First, I performed a vc-relax and then an NSCF run (I have not manually
>> updated the relaxed coordinates in the nscf input file, since the nscf
>> run automatically reads them from the previous run). Then I tried to
>> execute dos.x, but I got the segmentation fault shown below.
>>
>> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> Program DOS v.6.1 (svn rev. 13369) starts on 6Nov2017 at 23:32:32
>>
>> This program is part of the open-source Quantum ESPRESSO suite
>> for quantum simulation of materials; please cite
>> "P. Giannozzi et al., J. Phys.: Condens. Matter 21 395502 (2009);
>> URL http://www.quantum-espresso.org",
>> in publications or presentations arising from this work. More details at
>> http://www.quantum-espresso.org/quote
>>
>> Parallel version (MPI), running on 16 processors
>> R & G space division: proc/nbgrp/npool/nimage = 16
>>
>> Info: using nr1, nr2, nr3 values from input
>> Info: using nr1, nr2, nr3 values from input
>>
>> forrtl: severe (174): SIGSEGV, segmentation fault occurred
>> Image       PC                Routine            Line     Source
>> dos.x       000000000073A4B1  qexml_module_mp_q  3753     qexml.f90
>> dos.x       000000000055AD27  pw_restart_mp_rea  2101     pw_restart.f90
>> dos.x       00000000005579E4  pw_restart_mp_pw_  1057     pw_restart.f90
>> dos.x       000000000040A828  read_xml_file_     240      read_file.f90
>> dos.x       0000000000406331  MAIN__             95       dos.f90
>> dos.x       000000000040621C  Unknown            Unknown  Unknown
>> libc.so.6   0000003FF541ECDD  Unknown            Unknown  Unknown
>> dos.x       0000000000406119  Unknown            Unknown  Unknown
>> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>>
>> I do not get any error if I run dos.x directly after the vc-relax;
>> however, if I run dos.x after the NSCF, the error appears.
>> The input scripts for the vc-relax and nscf are attached to this mail.
>>
>> I thank you very much for your precious time and knowledge.
>>
>> Sincerely,
>> B S Bhushan
>> Ph.D. Scholar,
>> ABV-IIITM Gwalior, India.
>>
>> <graphene.in><graphene_nscf.in>
>
> --
> Andrea Ferretti, PhD
> S3 Center, Istituto Nanoscienze, CNR
> via Campi 213/A, 41125 Modena, Italy
> Tel: +39 059 2055322; Skype: andrea_ferretti
> URL: http://www.nano.cnr.it
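Two distinct failures appear in this thread. The segmentation fault was a crash inside the old XML reader of dos.x 6.1 (qexml.f90 in the traceback); the later "xml data file not found" is a version mismatch: QE-6.1 writes the old-style data-file.xml, while the 6.2 postprocessing tools (pw_readschemafile in the error message) look for the new-schema data-file-schema.xml inside <prefix>.save/. Running pw.x and dos.x from the same QE release, with matching prefix and outdir, avoids the mismatch. A minimal consistent pair of inputs, assuming the 'graphene' prefix from Vahid's example:

  ! nscf input, &CONTROL fragment -- run with the same QE release as dos.x
  &CONTROL
    calculation = 'nscf'
    prefix      = 'graphene'
    outdir      = './'
  /

  ! dos.in -- prefix and outdir must match the nscf run,
  ! so that dos.x finds ./graphene.save/
  &DOS
    prefix  = 'graphene'
    outdir  = './'
    fildos  = 'graphene.dos'
    Emin = -10.0, Emax = 16.0, DeltaE = 0.002
  /

The vc-relax itself should not need repeating: copying the final relaxed coordinates from the existing output into a fresh scf input, then running scf, nscf, and dos.x with QE-6.2 throughout, regenerates the data files in the format that dos.x 6.2 expects.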
_______________________________________________
Pw_forum mailing list
[email protected]
http://pwscf.org/mailman/listinfo/pw_forum
