Re: [QE-users] Vanderbilt Pseudopotential generation issue

2020-12-14 Thread Dr. K. C. Bhamu
Dear Dr. Andrii,
I cannot find vdb2upf.x or uspp2upf.x in the upftools dir:
$QE/upftools]$ ls
how_to_fix_upf.md  README


Regards
Bhamu

On Mon, Dec 14, 2020 at 11:14 PM Andrii Shyichuk via users <users@lists.quantum-espresso.org> wrote:

> Dear Dr Bhamu,
>
> You should have an upftools folder in your QE directory.
> vdb2upf.x and uspp2upf.x should be in there; either one should do.
>
> Best regards.
> Andrii Shyichuk, University of Wrocław

[QE-users] qe6.6 is not writing wavefunctions in tmp dir

2020-12-14 Thread Dr. K. C. Bhamu
Dear QE Users

I am using PAW PPs from the QE webpage with QE-6.6. The code is compiled
with MKL-ifort-2020.

I noticed that it is not writing the wavefunctions to the tmp directory and
gets stuck after printing the total energy (grep '!' *out). I then have to
kill the job.

What could be the problem?



Thank you very much

K.C. Bhamu
University of Ulsan
ROK

Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Zahra Khatibi
Hello Lorenzo,

Thank you for the suggestions. I'm using QE 6.4 and 6.5, so the 'ppcg'
option is there among the choices for the diagonalization flag. I've used it
as you suggested and it works really fast.
Also, out of curiosity I changed the PPs from PAW to USPP, and I can see
that the calculation runs about twice as fast. Any thoughts?

Wish you the best,
--
Z. Khatibi
Postdoctoral fellow
School of Physics
Trinity College Dublin


On Mon, Dec 14, 2020 at 2:11 PM Lorenzo Paulatto wrote:

> p.s. If you can use a newer version of QE that does calculation="ppcg" I
> found it to be much (i.e. 6x) faster in this case
>
> cheers
>

[QE-users] Plasma Frequency at Epsilon.x and simple.x codes

2020-12-14 Thread Anibal Thiago Bezerra
Dear Quantum ESPRESSO developers and users,

Currently I'm using both epsilon.x and simple.x to analyse the optical
properties of metallic alloys. The output of epsilon.x reports the plasma
frequency, while simple_ip.x reports the Drude plasma frequencies.
Sorry if I'm missing a basic concept, but what is the difference between
them, if any?

I'm getting different values for the same structure.

Thanks in advance

Anibal Bezerra
The Federal University of Alfenas

Re: [QE-users] Vanderbilt Pseudopotential generation issue

2020-12-14 Thread Andrii Shyichuk via users

Dear Dr Bhamu, 

You should have an upftools folder in your QE directory.
vdb2upf.x and uspp2upf.x should be in there; either one should do.

Best regards.
Andrii Shyichuk, University of Wrocław
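
For reference, the upftools converters usually take the Vanderbilt file name
either as a command-line argument or from a prompt, and write a corresponding
.UPF file. A minimal sketch, assuming your QE version still ships them and an
illustrative file name Ni.vdb:

  cd $QE/upftools
  make                  # build the converters (after configuring QE)
  ./vdb2upf.x Ni.vdb    # should produce Ni.vdb.UPF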

Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Lorenzo Paulatto


> Speaking of ppcg, is there any published (or otherwise public)
> benchmark of ppcg vs Davidson and/or cg? For which cases can ppcg be
> expected to be faster?
>
> Best regards,
> Michal Krompiec, Merck KGaA

I don't know, I'm not an author of that part of the code. I've just tested it
out of curiosity in this particular case (i.e. very slow diagonalization).


Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Michal Krompiec
Dear Lorenzo,
Speaking of ppcg, is there any published (or otherwise public) benchmark of
ppcg vs Davidson and/or cg? For which cases can ppcg be expected to be
faster?
Best regards,
Michal Krompiec, Merck KGaA

On Mon, Dec 14, 2020 at 2:12 PM Lorenzo Paulatto wrote:

> p.p.s I mean diagonalization='ppcg'
>
> On 2020-12-14 15:10, Lorenzo Paulatto wrote:
> > p.s. If you can use a newer version of QE that does calculation="ppcg"
> > I found it to be much (i.e. 6x) faster in this case
> >
> > cheers
> >

Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Lorenzo Paulatto

p.p.s I mean diagonalization='ppcg'

On 2020-12-14 15:10, Lorenzo Paulatto wrote:
> p.s. If you can use a newer version of QE that does calculation="ppcg"
> I found it to be much (i.e. 6x) faster in this case
>
> cheers


[QE-users] Vanderbilt Pseudopotential generation issue

2020-12-14 Thread Dr. K. C. Bhamu
Dear QE users,
I want to use the Vanderbilt ultrasoft pseudopotentials for my Ni-based
slab, as they have already been used in [1].
I have downloaded the tar file [2] and followed the instructions to
generate the PPs for Ni.
I got the unformatted (binary) PP for Ni in the Pot dir.
Now I need to use the reform.f program (from the Utility dir) to convert it
to a usable format.
At the top of this file I see that I should define F77, but I could not
find anywhere in the file where F77 is actually defined.

Could someone please help me with this? How can I convert this binary Ni PP
to a usable (UPF) format?

[1]. https://journals.aps.org/prb/pdf/10.1103/PhysRevB.101.195401

[2]. http://www.physics.rutgers.edu/~dhv/uspp/#DOWNLOAD

I use ifort2020, and I usually use the flags below for the QE compilation.

 FOPTS= -O -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback
-assume buffered_io -I$(MKLROOT)/include



Thank you very much
K.C. Bhamu
University of Ulsan
ROK
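
F77 in such sources and Makefiles is just the conventional name (a make or
shell variable) for the Fortran 77 compiler; it is not defined anywhere in
reform.f itself. A minimal sketch of building the utility by hand, assuming
ifort and an illustrative output name:

  cd Utility
  ifort -o reform.x reform.f   # any F77-capable compiler works, e.g. gfortran
  ./reform.x                   # then run it as the package instructions describe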

Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Lorenzo Paulatto
p.s. If you can use a newer version of QE that does calculation="ppcg" I 
found it to be much (i.e. 6x) faster in this case


cheers
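
As the p.p.s. above clarifies, the keyword is actually diagonalization, set
in the &ELECTRONS namelist of the pw.x input; a minimal sketch ('david' is
the default it replaces):

  &ELECTRONS
    diagonalization = 'ppcg'
  /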


Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Lorenzo Paulatto

Hello,

I've had a look at the output, and apart from the cutoffs, which appear a
bit too high (you are probably safe with 50/400 Ry of ecutwfc/ecutrho), I
only see two small problems:


1. The scf calculation is using 6 pools with 10 k-points, which means
that 4 pools have twice as much work to do as the others. In the ideal
case, the number of pools should be a divisor of the number of k-points
(i.e. 2, 5 or 10 in your case). Also, it is recommended that the number
of CPUs in a pool be a divisor of the number of CPUs on each computing
node, to avoid too much inter-node communication. In your case, the best
choice with 72 CPUs (on two nodes?) could be 2 pools. You may gain a bit
of time, but this is not going to change a lot. You should consider using
more CPUs if you have the budget, for example 10 pools of 12 or 18 CPUs
each.


2. The bands calculation runs on 12 CPUs and has a single k-point, while
each pool of the SCF one has up to 2 k-points. We would expect the bands
calculation to take about half as long as one scf step, i.e. about 50
seconds. However, the bands calculation has some trouble diagonalizing the
Hamiltonian; you can see it writes:


 ethr =  2.76E-12,  avg # of iterations =120.0

while typically the very last scf diagonalization is

 ethr =  2.98E-12,  avg # of iterations =  3.3

This is because the scf calculation can start with a very good guess for
the wavefunctions, while the bands calculation does not. It is still faster
than running the entire scf procedure, but only by a factor of ~2.3.


Fortunately, you do not usually need the eigenvalues to a precision of
10^-12. You can set the threshold by hand using the keyword diago_thr_init;
I guess 1.d-6 should be tight enough. However, double-check what you get in
the output, because I am half-suspecting that it may be overwritten by the
value in the restart file.


cheers

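To make the two suggestions above concrete, here is a minimal sketch of what
they translate to in practice (core counts, file names and the threshold
value are illustrative, not taken from this thread):

  # point 1: choose a pool count (-nk) that divides the number of k-points
  mpirun -np 72 pw.x -nk 2 -in scf.in > scf.out

  # point 2: loosen the initial diagonalization threshold of the bands run
  &ELECTRONS
    diago_thr_init = 1.0d-6
  /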

Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Zahra Khatibi
Hi,

This is my first attempt at such systems, and I used a pw.x input similar to
those of these papers:
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.101.085112
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.93.155104

I used PAW PPs and a 60 Ry wfc cutoff when I first started calculating the
bands. Then I increased the wfc cutoff to 100 Ry, but my calculations were
still very slow.

Kind regards,
Zahra



On Mon, Dec 14, 2020 at 10:18 AM Tobias Klöffel wrote:

> Hello Zahra,
>
> why do you use PAW and 100Ry wfc cutoff?
>
> Kind regards,
>

Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Tobias Klöffel

Hello Zahra,

why do you use PAW and 100Ry wfc cutoff?

Kind regards,

Re: [QE-users] time consuming band structure calculation for a supercell

2020-12-14 Thread Zahra Khatibi
Hello,

Sure. I've shared the input and output in the following link:
https://drive.google.com/drive/folders/1trdcWUw7GKSw0zLQouxygpaKwOl7_2KM?usp=sharing

Kind regards,

On Sat, Dec 12, 2020 at 5:01 PM Lorenzo Paulatto wrote:

>
> Also, I have tried running the band calculation on different systems (a
> local pc with 12 cores) and on HPC (with 36 and 72 cores). Every time I
> have the same problem. I have tried QE 6.5 and 6.4 for this calculation,
> all with the same issue.
>
>
> For comparison, I have here a calculation with 119 electrons, 10 k-points,
> and a 100 Ry kinetic energy cutoff. One SCF iteration takes about 5 seconds
> on 32 CPUs (2 nodes of a very old computing cluster that has since been
> retired). From 120 to 190 electrons there should be around a factor of 4 in
> CPU time (the dominant operations scale roughly as the cube of the system
> size: (190/120)^3 ≈ 4). But it would be easier to say what the source of
> the discrepancy is if you sent your input and output files to the list, to
> have a look.
>
>
> cheers
>
>
>
> All the best,
> Zahra
>
>
>
>
> On Fri, Dec 11, 2020, 22:22 Lorenzo Paulatto wrote:
>
>> Hello Zahra,
>>
>> if I understand correctly, you manage to do the scf calculation, but then
>> the band calculation is very slow. The cost per k-point of nscf should be
>> more or less the same as the cost per k-point of one scf iteration. If it
>> is not, there is something wrong. One possible problem is that conv_thr is
>> interpreted differently during nscf: a tight value (1.d-12 or less) may
>> cause the threshold of diagonalization in nscf to become too small and very
>> slow to converge. This should be fixed in v 6.7, but you can just increase
>> conv_thr in the nscf input if you're using a previous version.
>>
>> If not, it may be a problem with parallelism, i.e. running on too many
>> CPUs, or some human error such as running all the processes on the same
>> computing node.
>>
>>
>> cheers
>> On 2020-12-11 19:25, Zahra Khatibi wrote:
>>
>> Dear all,
>>
>> First of all, I hope everyone is safe and well in these crazy times.
>> I'm calculating the electronic band dispersion of a 2D heterostructure
>> with a 59 atom unit cell. This system is a small bandgap (10-20 meV)
>> semiconductor. The number of valence bands is (valence electrons/2) 181.
>> When I set 'nbnd' to 190, the band structure calculation costs me 30
>> minutes for each k point on HPC with 72 processors. This means that if I do
>> a simple band calculation for a high symmetry path with 100 points within,
>> I have to wait almost 50 hours! This gets even worse when I try to
>> evaluate the band dispersion with SOC switched on (twice the bands of the
>> spin-degenerate calculation).
>> Since the band dispersion evaluation is the major part of our study, I
>> was wondering if there is a way around this problem, like reducing the
>> number of bands by only looking at an energy interval close to the Fermi
>> energy?
>> I could see that there are lots of papers and studies in the literature
>> with huge unit cells and heavy atoms that have presented numerous band
>> structures (using QE). So I really appreciate it if you could help me here.
>>
>> Kind regards,
>> --
>> Z. Khatibi
>> School of Physics
>> Trinity College Dublin
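
The workaround described above amounts to loosening the convergence threshold
in the nscf/bands input; a minimal sketch, with an illustrative value:

  &ELECTRONS
    conv_thr = 1.0d-8   ! looser than a very tight scf value such as 1.d-12
  /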