[QE-users] Fail in drawing fat band

2024-04-19 Thread wangzongyi via users
Dear all,
I am using DFTtoolbox to draw a fat band plot for Nb3SiTe8. However, I am puzzled. 
The crystal structure is obtained from experiment, so I omitted the structure 
relaxation step. Then I submitted the commands one by one:
srun -n 128 pw.x < scf.in > scf.out
srun -n 128 pw.x < bands.in > bands.out
srun -n 1 bands.x < pp.bands.in > pp.bands.out   (Nb3SiTe8_bands.dat, 
Nb3SiTe8_bands.dat.gnu and Nb3SiTe8_bands.dat.rap are obtained after this step, 
but Nb3SiTe8_bands.dat.rap is an empty file)
srun -n 2 projwfc.x < projwfc.in > projwfc.out   (Nb3SiTe8_proj.dat.projwfc_up 
is obtained after this step)
After that, I used the code provided by DFTtoolbox to draw my fatband plot. I 
used the postproc.py file from the repository 
(DFTtoolbox/build/lib/DFTtoolbox/postproc.py at master · pipidog/DFTtoolbox on 
GitHub) without making any change to it. I changed some parameters of the file 
qe_pp.py. This is my script (Nb3SiTe8_proj.dat.projwfc_up is the file I 
obtained in the fourth step):
import os
import postproc

# Parameters
run_task = [1, 2, 3, 4]
wkdir = os.path.dirname(os.path.realpath('Nb3SiTe8_proj.dat.projwfc_up'))
# band_read & fatband_read
Ef = 8.65
# band_plot
kdiv = [15, 7, 5, 15, 13, 9, 5, 10, 9, 1]
klabel = [r'$\Gamma$', 'X', 'W', 'K', r'$\Gamma$', 'L', 'U', 'W', 'L', 'K']
Ebound = [-5, 5]
# fatband_plot
state_grp = [['1:1/2/a/a'], ['2:2/1/a/a']]
# Main
# Instantiate the postproc class from the module; "pp = postproc(wkdir)" calls
# the module object itself and fails. (This assumes the class inside
# postproc.py is also named postproc, as in the qe_pp.py template.)
pp = postproc.postproc(wkdir)
for task in run_task:
    if task == 1:    # band_read
        pp.band_read(Ef=Ef, bandfile='pw.bands.out')
    elif task == 2:  # band_plot
        pp.band_plot(kdiv=kdiv, klabel=klabel, Ebound=Ebound)
    elif task == 3:  # fatband_read
        pp.fatband_read(Ef=Ef, projout='projwfc.fat.out', projprefix='fatband')
    elif task == 4:  # fatband_plot
        pp.fatband_plot(state_grp=state_grp, kdiv=kdiv, klabel=klabel, Ebound=Ebound)
    elif task == 5:  # pdos_read
        pp.pdos_read(Ef=Ef)
    elif task == 6:  # pdos_plot
        pp.pdos_plot(state_grp=state_grp, Ebound=Ebound)
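For reference, ending up with only a postproc.cpython-37.pyc file and no plot is the classic signature of importing a module and then calling the module object as if it were a class: the import writes the bytecode cache, and the call raises a TypeError before any plotting code runs. A minimal sketch of that failure mode (using a stand-in module, not DFTtoolbox itself):

```python
import types

# Stand-in for "import postproc": the interpreter compiles postproc.py to a
# cached .pyc and binds the name to a module object.
postproc = types.ModuleType("postproc")

try:
    pp = postproc("/path/to/wkdir")  # mirrors pp = postproc(wkdir) on the module
except TypeError as err:
    print(err)  # -> 'module' object is not callable
```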


However, no fatband plot was drawn. I only obtained another Python file, 
postproc.cpython-37.pyc. What is wrong? Where should I change the code, or what 
should I write, to obtain the fatband plot?
Could you please help me? Thank you very much!


Zongyi Wang
___
The Quantum ESPRESSO community stands by the Ukrainian
people and expresses its concerns about the devastating
effects that the Russian military offensive has on their
country and on the free and peaceful scientific, cultural,
and economic cooperation amongst peoples
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

[QE-users] [SPAM] error with parallel execution

2024-04-19 Thread 孟令时
Dear developers and subscribers,
I'm Jerry from Peking University.
I configured with ./configure and built with make all.
After that, I ran PW/examples/example01 (via ./run_example) successfully.
When I use the input file of this example (al.scf.david.in) directly 
(pw.x < al.scf.david.in > scf.out), it runs successfully.
When I run it with one MPI process (mpirun -np 1 pw.x < al.scf.david.in > 
scf.out), it also runs successfully.


The problem occurs when I run it with two processes (mpirun -np 2 pw.x < al.scf.david.in > scf.out): I get a write_line error.


So I don't know what is wrong with my MPI parallel execution.


I will attach my files here, including config.log, configure.msg and scf.out:

--
Failed to register memory region (MR):

Hostname: wm1-login01
Address:  98769000
Length:   4194304
Error:Cannot allocate memory
--
--
Open MPI has detected that there are UD-capable Verbs devices on your
system, but none of them were able to be setup properly.  This may
indicate a problem on this system.

You job will continue, but Open MPI will ignore the "ud" oob component
in this run.

Hostname: wm1-login01
--
from test_input_xml: input file not opened or empty
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
:
system msg for write_line failure : Bad file descriptor
forrtl: error (78): process killed (SIGTERM)
Image  PCRoutineLineSource  
   
pw.x   01001F1E  Unknown   Unknown  Unknown
libpthread-2.17.s  7FA56F8AC5F0  Unknown   Unknown  Unknown
libc-2.17.so   7FA56F2C1D37  getrusage Unknown  Unknown
pw.x   00F26E5D  Unknown   Unknown  Unknown
pw.x   00F26BFB  Unknown   Unknown  Unknown
pw.x   00E34EFF  Unknown   Unknown  Unknown
pw.x   00A1AA18  Unknown   Unknown  Unknown
pw.x   008319ED  Unknown   Unknown  Unknown
pw.x   007ACAAB  Unknown   Unknown  Unknown
pw.x   007ABE7A  Unknown   Unknown  Unknown
pw.x   0086C4B4  Unknown   Unknown  Unknown
pw.x   0069C6FF  Unknown   Unknown  Unknown
pw.x   0069ADB1  Unknown   Unknown  Unknown
pw.x   00699427  Unknown   Unknown  Unknown
pw.x   0040D0AA  Unknown   Unknown  Unknown
pw.x   0040A3ED  Unknown   Unknown  Unknown
pw.x   0057E9AE  Unknown   Unknown  Unknown
pw.x   004074DF  Unknown   Unknown  Unknown
pw.x   0040735E  Unknown   Unknown  Unknown
libc-2.17.so   7FA56F1EF505  __libc_start_main Unknown  Unknown
pw.x   00407269  Unknown   Unknown  Unknown
forrtl: error (78): process killed (SIGTERM)
Image  PCRoutineLineSource  
   
pw.x   01001F1E  Unknown   Unknown  Unknown
libpthread-2.17.s  7F6443BBF5F0  Unknown   Unknown  Unknown
libpthread-2.17.s  7F6443BBE69E  write Unknown  Unknown
pw.x   01018835  Unknown   Unknown  Unknown
pw.x   0101970E  Unknown   Unknown  Unknown
pw.x   01057D2E  Unknown   Unknown  Unknown
pw.x   00535DA9  Unknown   Unknown  Unknown
pw.x   00536AF6  Unknown   Unknown  Unknown
pw.x   00412300  Unknown   Unknown  Unknown
pw.x   0040A3ED  Unknown   Unknown  Unknown
pw.x   0057E9AE  Unknown   Unknown  Unknown
pw.x   004074DF  Unknown   Unknown  Unknown
pw.x   0040735E  Unknown   Unknown  Unknown
libc-2.17.so   7F6443502505  __libc_start_main Unknown  Unknown
pw.x   00407269  Unknown   Unknown  Unknown
forrtl: error (78): process killed (SIGTERM)
Image  PCRoutineLineSource  
   
pw.x   01001F1E  Unknown   Unknown  Unknown
libpthread-2.17.s  7FDAD78995F0  Unknown   Unknown  Unknown
libmkl_avx2.so 7FDAD4258B23  Unknown   Unknown  Unknown

[QE-users] A strange error when using GPU accelerated ph.x

2024-04-19 Thread lq1998 via users
Dear developers and users,


I tried to run the GPU version of QE for an electron-phonon coupling calculation on an 
A100 card. The structure relaxation and the self-consistent calculation were 
successful. However, when I ran the phonon calculation, my job crashed with a 
strange error:


##
[m005:65520:0:65520] Caught signal 11 (Segmentation fault: address not mapped 
to object at address 0xfffc)


/fs08/home/js_luqing/src/qe-7.2/PHonon/PH/phq_setup.f90: [ phq_setup_() ]
   ...
   322 !  nat_todo, atomo, comp_irr
   323
   324 DO irr=0,nirr
== 325   comp_irr(irr)=comp_irr_iq(irr,current_iq)
   326   IF (elph .AND. irr>0) comp_elph(irr)=comp_irr(irr)
   327 ENDDO
   328 !


 backtrace (tid: 65520) 
0 0x004a2780 phq_setup_() 
/fs08/home/js_luqing/src/qe-7.2/PHonon/PH/phq_setup.f90:325
1 0x004700e1 initialize_ph_() 
/fs08/home/js_luqing/src/qe-7.2/PHonon/PH/initialize_ph.f90:79
2 0x0041a811 do_phonon_() 
/fs08/home/js_luqing/src/qe-7.2/PHonon/PH/do_phonon.f90:100
3 0x00413d25 MAIN_() 
/fs08/home/js_luqing/src/qe-7.2/PHonon/PH/phonon.f90:78
4 0x00413c71 main() ???:0
5 0x00022555 __libc_start_main() ???:0
6 0x0040cd8d _start() ???:0
=
[m005:65520] *** Process received signal ***
[m005:65520] Signal: Segmentation fault (11)
[m005:65520] Signal code: (-6)
[m005:65520] Failing at address: 0x6a8fff0
[m005:65520] [ 0] /lib64/libpthread.so.0(+0xf630)[0x2ac4edf09630]
[m005:65520] [ 1] /fs08/home/js_luqing/src/qe-7.2/bin/ph.x[0x4a2780]
[m005:65520] [ 2] /fs08/home/js_luqing/src/qe-7.2/bin/ph.x[0x4700e1]
[m005:65520] [ 3] /fs08/home/js_luqing/src/qe-7.2/bin/ph.x[0x41a811]
[m005:65520] [ 4] /fs08/home/js_luqing/src/qe-7.2/bin/ph.x[0x413d25]
[m005:65520] [ 5] /fs08/home/js_luqing/src/qe-7.2/bin/ph.x[0x413c71]
[m005:65520] [ 6] /lib64/libc.so.6(__libc_start_main+0xf5)[0x2ac4ee9e7555]
[m005:65520] [ 7] /fs08/home/js_luqing/src/qe-7.2/bin/ph.x[0x40cd8d]
[m005:65520] *** End of error message ***
--
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--
--
mpirun noticed that process rank 0 with PID 0 on node m005 exited on signal 11 
(Segmentation fault).

#


I am not an expert in coding, but it seems the crash happens right at line 325, 
which is fairly strange. I don't know how to solve this problem, and I would be 
glad if anyone could help me.


Yours,
Qing Lu


lq1998
1148330...@qq.com




Re: [QE-users] Difference between data file saved inside outdir

2024-04-19 Thread Paolo Giannozzi
The hdf5 files are in a portable format, one per k-point, independent of the 
number of processors and the kind of parallelization.


The prefix.wfc* files are in a non-portable binary format, one per processor, 
and depend upon the number of processors and the kind of parallelization.


For reasons that are too long to explain, calculations that use SCF 
wavefunctions read them from the "portable" format and re-write them to the 
"non-portable" format. It's a little bit dumb and actually unnecessary 
in most cases.
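The two families can be told apart purely from where they live and how they are named; a small sketch (the glob patterns assume the default naming described above: wfc*.hdf5 inside outdir/prefix.save/, and prefix.wfc1, prefix.wfc2, ... directly in outdir):

```python
from pathlib import Path

def wavefunction_files(outdir, prefix):
    """Return (portable, scratch): the portable per-k-point HDF5 files kept
    in outdir/prefix.save/, and the non-portable per-process scratch files
    written directly in outdir."""
    out = Path(outdir)
    portable = sorted((out / f"{prefix}.save").glob("wfc*.hdf5"))
    scratch = sorted(out.glob(f"{prefix}.wfc*"))
    return portable, scratch
```

As I understand the above, only the .save directory is needed for restarts and post-processing; the per-process prefix.wfc* scratch files are re-created from it.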


Paolo

On 28/03/2024 08:04, Abdul Muhaymin via users wrote:

Hello all,

After a spin-polarized scf calculation, I have several files such as 
wfcup#.hdf5, wfcdw#.hdf5, etc. in outdir/prefix.save. The number of files 
is equal to 2 × the number of k-points. However, after the nscf 
calculation, I am getting more data files in the outdir (not in 
outdir/prefix.save): I have N prefix.wfc* files, where N is the 
number of processors I used. What is the difference between these two 
types of files? Are they both wavefunctions? If I delete them, will there 
be any problem?


Sincerely,
Abdul Muhaymin,
Graduate student,
Bilkent University, Ankara.


--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 206, 33100 Udine Italy, +39-0432-558216

[QE-users] [SPAM] GPU for QE

2024-04-19 Thread Vor st via users

Dear users and developers, 
How can I enable the GPU in this input/SLURM job? 
Or do I need a plugin for QE?
 
Best regards,
--
Vorobyev Stepan
ITMO university , Infochemistry science centre
 
 
 

TiO2101.slurm
Description: Binary data

&CONTROL
  calculation = 'relax'
  etot_conv_thr =   6.00d-05
  forc_conv_thr =   1.00d-04
  outdir = './out/'
  prefix = 'aiida'
  pseudo_dir = './pseudo/'
  tprnfor = .true.
  tstress = .true.
  verbosity = 'high'
  tefield   = .true.
  dipfield  = .true.
  wf_collect = .true.
  !max_seconds = 64800
/

&SYSTEM
  degauss =   0.02
  ecutrho =   5.00d+02
  ecutwfc =   5.00d+01
  ibrav = 0
  nat = 384
  !nosym = .true.
  !nspin = 2
  ntyp = 2
  occupations = 'smearing'
  smearing = 'gaussian'
  vdw_corr = 'DFT-D3'
  edir= 3
  emaxpos = 0.85
  eopreg  = 0.0304432958
  eamp= 0.0
/

&ELECTRONS
  !conv_thr =   1.20d-09
  electron_maxstep = 300
  mixing_beta =   4.00d-01
/

&IONS
/

&CELL
/
ATOMIC_SPECIES
O  15.9994 O.pbesol-n-kjpaw_psl.0.1.UPF
Ti 47.867 ti_pbesol_v1.4.uspp.F.UPF
ATOMIC_POSITIONS crystal
O   0.087656   0.928437   0.000000   0 0 0
O   0.087656   0.428437   0.000000   0 0 0
O   0.337656   0.928437   0.000000   0 0 0
O   0.337656   0.428437   0.000000   0 0 0
O   0.587656   0.928437   0.000000   0 0 0
O   0.587656   0.428437   0.000000   0 0 0
O   0.837656   0.928437   0.000000   0 0 0
O   0.837656   0.428437   0.000000   0 0 0
O   0.962656   0.178438   0.000000   0 0 0
O   0.962656   0.678438   0.000000   0 0 0
O   0.212656   0.178438   0.000000   0 0 0
O   0.212656   0.678438   0.000000   0 0 0
O   0.462656   0.178438   0.000000   0 0 0
O   0.462656   0.678438   0.000000   0 0 0
O   0.712656   0.178438   0.000000   0 0 0
O   0.712656   0.678438   0.000000   0 0 0
Ti  0.087656   0.341979   0.022145   0 0 0
Ti  0.087656   0.841979   0.022145   0 0 0
Ti  0.337656   0.341979   0.022145   0 0 0
Ti  0.337656   0.841979   0.022145   0 0 0
Ti  0.587656   0.341979   0.022145   0 0 0
Ti  0.587656   0.841979   0.022145   0 0 0
Ti  0.837656   0.341979   0.022145   0 0 0
Ti  0.837656   0.841979   0.022145   0 0 0
Ti  0.962656   0.091979   0.022145   0 0 0
Ti  0.962656   0.591979   0.022145   0 0 0
Ti  0.212656   0.091979   0.022145   0 0 0
Ti  0.212656   0.591979   0.022145   0 0 0
Ti  0.462656   0.091979   0.022145   0 0 0
Ti  0.462656   0.591979   0.022145   0 0 0
Ti  0.712656   0.091979   0.022145   0 0 0
Ti  0.712656   0.591979   0.022145   0 0 0
O   0.087656   0.074271   0.026680   0 0 0
O   0.087656   0.574271   0.026680   0 0 0
O   0.337656   0.074271   0.026680   0 0 0
O   0.337656   0.574271   0.026680   0 0 0
O   0.587656   0.074271   0.026680   0 0 0
O   0.587656   0.574271   0.026680   0 0 0
O   0.837656   0.074271   0.026680   0 0 0
O   0.837656   0.574271   0.026680   0 0 0
O   0.962656   0.324271   0.026680   0 0 0
O   0.962656   0.824271   0.026680   0 0 0
O   0.212656   0.324271   0.026680   0 0 0
O   0.212656   0.824271   0.026680   0 0 0
O   0.462656   0.324271   0.026680   0 0 0
O   0.462656   0.824271   0.026680   0 0 0
O   0.712656   0.324271   0.026680   0 0 0
O   0.712656   0.824271   0.026680   0 0 0
O   0.962656   0.005521   0.044289   0 0 0
O   0.962656   0.505521   0.044289   0 0 0
O   0.212656   0.005521   0.044289   0 0 0
O   0.212656   0.505521   0.044289   0 0 0
O   

Re: [QE-users] lambda.x: at line 61 of file lambda.f90

2024-04-19 Thread Paolo Giannozzi

On 19/04/2024 10:21, Dawid Ciszewski wrote:


At line 61 of file lambda.f90 (unit = 5, file = 'stdin')
Fortran runtime error: End of file


look at line 61 of file lambda.f90: it reads the first line of the 
input. You are not feeding the program its input file (i.e., run it as 
`lambda.x < lambda.in`, with the input redirected to standard input).


Paolo
--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 206, 33100 Udine Italy, +39-0432-558216


Re: [QE-users] lambda.x: at line 61 of file lambda.f90

2024-04-19 Thread Dawid Ciszewski
For completeness, I use QE version 7.2.

On Sun, 14 Apr 2024 at 14:30, Dawid Ciszewski wrote:

> Hello QE users and developers,
>
> I tried to run lambda.x to calculate Tc for my graphene layer, but I got the 
> following error:
>
>
> Error termination. Backtrace:
> At line 61 of file lambda.f90 (unit = 5, file = 'stdin')
> Fortran runtime error: End of file
>
> Error termination. Backtrace:
> --
> Primary job  terminated normally, but 1 process returned
> a non-zero exit code. Per user-direction, the job has been aborted.
> --
> At line 61 of file lambda.f90 (unit = 5, file = 'stdin')
> Fortran runtime error: End of file
>
> Error termination. Backtrace:
> #0  0x7f6ba9a23860 in ???
> #1  0x7f6ba9a243b9 in ???
> #2  0x7f6ba9a2507f in ???
> #3  0x7f6ba9c5780b in ???
> #4  0x7f6ba9c50d64 in ???
> #5  0x7f6ba9c518f9 in ???
> #6  0x402615 in elph
> at /scientific/qe-7.2/PHonon/PH/lambda.f90:61
> #7  0x40246c in main
> at /scientific/qe-7.2/PHonon/PH/lambda.f90:187
> At line 61 of file lambda.f90 (unit = 5, file = 'stdin')
> Fortran runtime error: End of file
>
>
> This is the content of my lambda.in file:
>
> 45  0.121  0
> 7
> 0.0   0.0   0.0 1
> 0.0   0.192450090   0.0 6
> 0.0   0.384900179   0.0 6
> 0.0  -0.577350269   0.0 3
> 0.16667   0.288675135   0.0 6
> 0.16667   0.481125224   0.0 12
> 0.3   0.577350269   0.0 2
> elph_dir/elph.inp_lambda.1
> elph_dir/elph.inp_lambda.2
> elph_dir/elph.inp_lambda.3
> elph_dir/elph.inp_lambda.4
> elph_dir/elph.inp_lambda.5
> elph_dir/elph.inp_lambda.6
> elph_dir/elph.inp_lambda.7
> 0.20
>
>
> I have elph.inp files in a separate folder - elph_dir. I would be very 
> grateful for any hints on how to resolve this problem. I searched for similar 
> subjects in the archive but none of the solutions worked in my case.
>
>
> With kind regards,
>
> Dawid Ciszewski
>
> PhD student
>
> University of Warsaw, Center of New Technologies
>
> Email: d.ciszew...@cent.uw.edu.pl
>
>

Re: [QE-users] QE 7.2 error required attribute rank not found

2024-04-19 Thread Paolo Giannozzi

Can't reproduce

Paolo

On 12/04/2024 09:43, Sol Loja via users wrote:

Dear QE Users Forum,

I am encountering the following error after running an nscf calculation 
on QE 7.2:


  %%
      task #         0
      from qes_read: matrixType : error #        10
      required attribute rank not found, can't read further, stopping
  %%


Please advise me on how to address this. Below is the input file:


&SYSTEM
ibrav=1,
celldm(1)=15,
nat=3,
ntyp=2,
ecutwfc=44.099,
ecutrho=440.99,
/

&ELECTRONS
conv_thr=1.0d-6,
mixing_beta=0.7,
/

&IONS
ion_dynamics = 'bfgs',
/
ATOMIC_SPECIES
C 12.011 C.pbe-n-kjpaw_psl.1.0.0.UPF
O 15.999 O.pbe-n-kjpaw_psl.1.0.0.UPF
ATOMIC_POSITIONS {angstrom}
C             7.50        7.50        7.50    0   0   0

O             8.6736893489        7.5000297668        7.5000297668
O             6.3262902178        7.4999803011        7.4999803011
K_POINTS {gamma}

Best regards,
Sol Loja



--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 206, 33100 Udine Italy, +39-0432-558216

Re: [QE-users] [SPAM] about relax-structure issue

2024-04-19 Thread Paolo Giannozzi

On 18/04/2024 15:14, 孙昊冉 wrote:


Dear Professors and Experts:


When I use pw.x to relax a cell structure, I get the message "= Bad 
termination of one of your application processes = rank 0 pid... running 
at ... = exit status: 3". I wonder what exactly "exit status: 3" means.


it means that the ionic optimization has not converged (see the header 
of PW/src/run_pwscf.f90). You should look at the output for more information.


Paolo


Many Thanks

The input file is list as follows:



&CONTROL
   calculation='vc-relax'

   disk_io='low'

   prefix='***'

   restart_mode='from_scratch'

   verbosity='high'

   tprnfor=.true.

   tstress=.true.

   pseudo_dir = '~/qe-6.8/pseudo'

   forc_conv_thr=1.0d-5

   outdir='./tmp'

/



&SYSTEM
   ibrav= 0,

   nat= 40,

   ntyp= 2,

   occupations = 'smearing', smearing = 'gauss', degauss = 1.0d-2,

   ecutwfc = 160,

   ecutrho = 800,

/



&ELECTRONS
   conv_thr = 1.0d-8

   mixing_beta = 0.7d0

   diagonalization = 'cg'

/



&IONS
     ion_dynamics='bfgs'

/



&CELL
     press_conv_thr=0.1

/

ATOMIC_SPECIES

  ***

CELL_PARAMETERS (angstrom)

   ***

ATOMIC_POSITIONS (crystal)

  ***

K_POINTS {automatic}

***



Dr Haoran

Xinjiang University, Urumqi

18/4/2024








--
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 206, 33100 Udine Italy, +39-0432-558216