Re: [QE-users] error with qe 6.5
It would be helpful to provide, together with the input, the pseudopotential files or (better) pointers to where they can be found. Anyway: ecutwfc=1.0D-6 looks small, doesn't it? Under which exact conditions do you get the error you mention?

Paolo

On Fri, Feb 12, 2021 at 1:49 PM José Carlos Conesa Cegarra <jccon...@icp.csic.es> wrote:

> Dear all,
>
> I have found (several times) this error with qe-6.5:
>
> %%
>      Error in routine allocate_fft (1):
>      wrong ngm
> %%
>      stopping ...
>
> The input file is attached. Please help.
>
> --
> José C. Conesa
> Research Professor
> Instituto de Catálisis y Petroleoquímica, CSIC
> Marie Curie, 2; Campus de Cantoblanco
> 28028 Madrid (Spain)
> Phone +34 915854766

--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 206, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
[QE-users] error with qe 6.5
Dear all,

I have found (several times) this error with qe-6.5:

%%
     Error in routine allocate_fft (1):
     wrong ngm
%%
     stopping ...

The input file is attached. Please help.

--
José C. Conesa
Research Professor
Instituto de Catálisis y Petroleoquímica, CSIC
Marie Curie, 2; Campus de Cantoblanco
28028 Madrid (Spain)
Phone +34 915854766

[attached input file]

&CONTROL
  calculation='scf'
  title='CoGeSnN4_U'
  restart_mode='from_scratch'
  outdir='./tmp'
  etot_conv_thr=1.0D-5
  pseudo_dir='../..'
/
&SYSTEM
  space_group=148, rhombohedral=.TRUE.
  A=8.6856, B=8.6856, C=8.6856
  cosAB=-0.0021014, cosAC=-0.0021014, cosBC=-0.0021014
  nat=19, ntyp=4
  starting_magnetization(1)=1, nspin=2
  ecutwfc=1.0D-6
  occupations='tetrahedra_opt'
  lda_plus_u=.TRUE., Hubbard_U(1)=0.01, Hubbard_U(2)=0.01
/
&ELECTRONS
/
ATOMIC_SPECIES
Co  59.0  Co_pbe_v1.2.uspp.F.UPF
N   14.0  N.pbe.theos.UPF
Ge  74.0  Ge.pbe-dn-kjpaw_psl.1.0.0.UPF
Sn 120.0  Sn_pbe_v1.uspp.F.UPF
ATOMIC_POSITIONS crystal_sg
Co  0.0      0.0      0.0
N  -0.48261 -0.26780 -0.99868
N  -0.51739 -0.00132 -0.73220
N  -0.98743 -0.98331 -0.77013
N  -0.01257 -0.22987 -0.01669
N  -0.76645 -0.47954 -0.49898
N  -0.23355 -0.50102 -0.52046
N  -0.26651 -0.76957 -0.25152
N  -0.73349 -0.74848 -0.23043
N  -0.51528 -0.73873 -0.01378
N  -0.48472 -0.98622 -0.26127
N  -0.23402 -0.23402 -0.23402
Ge -0.00192 -0.24449 -0.24922
Ge -0.99808 -0.75078 -0.75551
Ge -0.49517 -0.50483  0.0
Ge -0.75545 -0.24455 -0.5
Sn -0.37211 -0.87601 -0.87826
Sn -0.24982 -0.75018 -0.5
Sn -0.37214 -0.37214 -0.37214
K_POINTS automatic
6 6 6 0 0 0
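As Paolo's reply points out, the likely culprit in the input above is ecutwfc=1.0D-6: a plane-wave cutoff that small leaves essentially no G-vectors, which is consistent with allocate_fft aborting with "wrong ngm". A minimal sketch of a saner &SYSTEM fragment follows; the 50/400 Ry values are illustrative starting points only, not converged settings for these pseudopotentials, and presumably the 1.0D-6 was meant for a convergence threshold rather than a cutoff:

    &SYSTEM
      ...
      ecutwfc = 50.0    ! wavefunction cutoff (Ry)
      ecutrho = 400.0   ! density cutoff (Ry); ultrasoft pseudopotentials
                        ! typically want ecutrho ~ 8-12 x ecutwfc
      ...
    /

Cutoff convergence can then be checked by stepping ecutwfc (e.g. 40, 50, 60 Ry) until the total energy per atom stops changing within the desired tolerance.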
[QE-users] Regarding calculation of Deformation potential of 3D cubic CH3NH3PbI3 material
Dear Experts/Users,

I am trying to find the mobility of 3D cubic CH3NH3PbI3 through deformation potential theory using QE. For the mobility calculation, one has to find the deformation potential. Could someone please elaborate on how to obtain the deformation potential of this cubic material? The deformation potential has a "change in energy" term in the theoretical formula:

    deformation potential = change in energy / (change in length / original length)

It would be a great help if someone could suggest how to proceed.

Thanks in advance,
Best Regards
Sandeep
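For reference, the quantity in that formula is usually evaluated by finite differences. A sketch of the standard recipe, assuming the Bardeen-Shockley definition (the 0.5% strain step below is an arbitrary illustrative choice):

    E_1 = dE_edge / (dl / l_0) ~ [E_edge(+delta) - E_edge(-delta)] / (2 * delta)

1. Run scf calculations at lattice constants a_0*(1 - delta), a_0, and a_0*(1 + delta), e.g. delta = 0.005.
2. For each run, record the band-edge energy (CBM for electron mobility, VBM for holes), aligned to a common reference such as a deep core level or the vacuum level, since Kohn-Sham eigenvalues from different cells are not directly comparable.
3. Fit E_edge versus the strain delta; the slope is the deformation potential E_1.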
Re: [QE-users] Calculations converged in QE - 6.0 version is not converging in QE-6.7max version - Reg
Dear Singaravelan,

your problem is input-dependent. Please provide the full input, the output (with both versions of QE), and the pseudopotentials if you want a meaningful answer.

kind regards
--
Lorenzo Paulatto - Paris

On Feb 12 2021, at 10:10 am, singaravelan T R wrote:

> Dear all,
> I am working with the Bi2Se3 compound and performing slab calculations. A
> three-quintuple-layer calculation using version 6.0 gives the desired
> result for the scf step, whereas the same calculation (same input file and
> same UPF files) in version 6.7MaX does not converge. On opening the output
> file, I found that the Harris-Foulkes estimate is not displayed and the
> scf accuracy was very large. Kindly help me understand where the problem is.
> with thanks,
> Singaravelan
> Research Scholar
> University of Madras.
[QE-users] Calculations converged in QE - 6.0 version is not converging in QE-6.7max version - Reg
Dear all,

I am working with the Bi2Se3 compound and performing slab calculations. A three-quintuple-layer calculation using version 6.0 gives the desired result for the scf step, whereas the same calculation (same input file and same UPF files) in version 6.7MaX does not converge. On opening the output file, I found that the Harris-Foulkes estimate is not displayed and the scf accuracy was very large. Kindly help me understand where the problem is.

with thanks,
Singaravelan
Research Scholar
University of Madras.
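For readers debugging similar scf stalls while the files Lorenzo asked for are assembled: the usual first knobs live in the &ELECTRONS namelist. A sketch with illustrative values only; whether any of them helps here depends entirely on the actual input:

    &ELECTRONS
      conv_thr         = 1.0D-8       ! scf convergence threshold (Ry)
      mixing_beta      = 0.2          ! smaller mixing often stabilizes metallic slabs
      mixing_mode      = 'local-TF'   ! sometimes better for inhomogeneous systems
      electron_maxstep = 200          ! allow more scf iterations
    /

As far as I know, the Harris-Foulkes estimate line was deliberately removed from pw.x output in releases after 6.3, so its absence in the 6.7 output is expected and not itself a symptom.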
Re: [QE-users] k-points parallelization very slow
Parallelization over k-points requires very little communication, but it is not as effective as plane-wave parallelization in distributing memory. I have also noticed that, on a typical multi-core processor, the performance of k-point parallelization is often worse than that of plane-wave parallelization, sometimes much worse, for reasons that are not completely clear to me. A factor to be considered is how your machine distributes the pools across the nodes: each of the 4 pools of 32 processors should stay on one of the nodes, but I wouldn't be too sure that this is what is really happening.

In your test there is an anomaly, though: most of the time of "c_bands" (computing the band structure) should be spent in "cegterg" (iterative diagonalization). With 4*8 processors:

     c_bands      : 14153.20s CPU  14557.65s WALL (     461 calls)
     Called by c_bands:
     init_us_2    :   102.63s CPU    105.55s WALL (    1952 calls)
     cegterg      : 12700.70s CPU  13083.44s WALL (     943 calls)

only 10% of the time is spent somewhere else, while with 4*32 processors:

     c_bands      : 18068.08s CPU  18219.06s WALL (     454 calls)
     Called by c_bands:
     init_us_2    :    26.53s CPU     27.06s WALL (    1924 calls)
     cegterg      :  2422.03s CPU   2451.72s WALL

75% of the time is not accounted for.

Paolo

On Fri, Feb 12, 2021 at 5:01 AM Christoph Wolf wrote:

> Dear all,
>
> I tested k-point parallelization and I wonder if the following results can
> be normal or if my cluster has some serious problems...
>
> The system has 74 atoms and a 2x2x1 k-point grid, resulting in 4 k-points:
>
>   number of k points=  4  Fermi-Dirac smearing, width (Ry)= 0.0050
>     cart. coord. in units 2pi/alat
>   k(1) = (  0.000       0.000       0.000), wk = 0.250
>   k(2) = (  0.3535534  -0.3535534   0.000), wk = 0.250
>   k(3) = (  0.000      -0.7071068   0.000), wk = 0.250
>   k(4) = ( -0.3535534  -0.3535534   0.000), wk = 0.250
>
> 1) run on 1 node x 32 CPUs with -nk 4
>
>   Parallel version (MPI), running on 32 processors
>   MPI processes distributed on 1 nodes
>   K-points division:     npool = 4
>   R & G space division:  proc/nbgrp/npool/nimage = 8
>   Fft bands division:    nmany = 1
>
>   PWSCF : 5h42m CPU 6h 3m WALL
>
> 2) run on 4 nodes x 32 CPUs with -nk 4
>
>   Parallel version (MPI), running on 128 processors
>   MPI processes distributed on 4 nodes
>   K-points division:     npool = 4
>   R & G space division:  proc/nbgrp/npool/nimage = 32
>   Fft bands division:    nmany = 1
>
>   PWSCF : 6h32m CPU 6h36m WALL
>
> I compiled pwscf with Intel 19 MKL, MPI and OpenMP. If I understood
> correctly, -nk parallelization should work well, as there is not much
> communication between nodes, but this does not seem to work for me at
> all... Detailed timing logs are attached!
>
> TIA!
> Chris
>
> --
> IBS Center for Quantum Nanoscience
> Seoul, South Korea

--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 206, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
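As a concrete illustration of the pool-placement check Paolo mentions: QE forms each pool from a block of consecutive MPI ranks, so one can try to force one 32-rank pool per node at launch time. A sketch assuming Open MPI (flags differ for other launchers, e.g. Intel MPI uses -ppn 32; the input/output file names are placeholders):

    # 128 MPI ranks, 32 per node, nodes filled rank-block by rank-block,
    # so that each 32-rank pool should land on a single node
    mpirun -np 128 --map-by ppr:32:node pw.x -nk 4 -in scf.in > scf.out

Whether the mapping actually took effect can be cross-checked against the "MPI processes distributed on N nodes" line at the top of the output.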