There are a few tests here:
https://gitlab.com/max-centre/benchmarks/-/tree/master/Quantum_Espresso
but there is much ongoing work on diagonalization algorithms.
Paolo
On Mon, Dec 14, 2020 at 4:08 PM Michal Krompiec
wrote:
> Dear Lorenzo,
> Speaking of ppcg, is there any published (or
Hello Lorenzo,
Thank you for the suggestions. I'm using QE 6.4 and 6.5, so the 'ppcg'
option is there among the choices for the diagonalization flag. I've used it
as you suggested and it works really fast.
Also, out of curiosity I changed the pseudopotentials from PAW to USPP, and
I can see that it works twice
> Speaking of ppcg, is there any published (or otherwise public)
> benchmark of ppcg vs Davidson and/or cg? For which cases can ppcg be
> expected to be faster?
I don't know, I'm not an author of that part of the code. I've just tested
it out of curiosity in this particular case (i.e. very slow
Dear Lorenzo,
Speaking of ppcg, is there any published (or otherwise public) benchmark of
ppcg vs Davidson and/or cg? For which cases can ppcg be expected to be
faster?
Best regards,
Michal Krompiec, Merck KGaA
On Mon, Dec 14, 2020 at 2:12 PM Lorenzo Paulatto wrote:
> p.p.s I mean
p.p.s I mean diagonalization='ppcg'
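For reference, the keyword goes in the &ELECTRONS namelist of the pw.x
input file; a minimal sketch (only diagonalization is from this thread, the
convergence threshold is illustrative):

```fortran
&ELECTRONS
   ! projected preconditioned conjugate gradient, available in recent QE
   diagonalization = 'ppcg'
   conv_thr        = 1.0d-8   ! illustrative value, not from the thread
/
```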
On 2020-12-14 15:10, Lorenzo Paulatto wrote:
> p.s. If you can use a newer version of QE that does calculation="ppcg"
> I found it to be much (i.e. 6x) faster in this case
> cheers
p.s. If you can use a newer version of QE that does calculation="ppcg" I
found it to be much (i.e. 6x) faster in this case
cheers
On 2020-12-14 14:50, Lorenzo Paulatto wrote:
> Hello,
> I've had a look at the output, and apart from the cutoff, which appears
> a bit too high (you are probably safe
Hello,
I've had a look at the output, and apart from the cutoff, which appears a
bit too high (you are probably safe with 50/400 Ry of ecutwfc/ecutrho), I
only see two small problems:
1. the scf calculation is using 6 pools with 10 k-points, which means
that 4 pools have twice as much work to
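The imbalance is simple arithmetic: 10 k-points spread as evenly as
possible over 6 pools leaves some pools with two k-points and some with
one. A small sketch of an even distribution (illustrative, not QE's actual
scheduler code; QE's assignment may differ in detail):

```python
# Sketch: distribute n_kpoints as evenly as possible over n_pools.
# With 10 k-points and 6 pools, 4 pools get 2 k-points and 2 pools
# get 1, so 4 pools do twice the work of the other 2.
def kpoints_per_pool(n_kpoints, n_pools):
    base, extra = divmod(n_kpoints, n_pools)
    return [base + 1 if i < extra else base for i in range(n_pools)]

counts = kpoints_per_pool(10, 6)
print(counts)                      # [2, 2, 2, 2, 1, 1]
print(max(counts) / min(counts))   # 2.0 -> load-imbalance factor
```

With 5 pools instead (10/5 = 2 k-points each) the distribution would be
perfectly balanced.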
Hi,
This is my first attempt at such systems, and I used a pw.x input similar
to those of these papers:
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.101.085112
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.93.155104
I used PAW pseudopotentials and a 60 Ry wfc cutoff when I first started calculating bands.
Hello Zahra,
why do you use PAW and a 100 Ry wfc cutoff?
Kind regards,
On 12/14/20 11:13 AM, Zahra Khatibi wrote:
Hello,
Sure. I've shared the input and output in the following link:
https://drive.google.com/drive/folders/1trdcWUw7GKSw0zLQouxygpaKwOl7_2KM?usp=sharing
Kind regards,
On Sat, Dec 12, 2020 at 5:01 PM Lorenzo Paulatto wrote:
>
> Also I have tried running the band calculation on different
Also I have tried running the band calculation on different systems
(local PC with 12 nodes) and HPC (with 36 and 72 nodes). Every time I
have the same problem. I have tried QE 6.5 and 6.4 for this
calculation, all with the same issue.
For comparison, I have here a calculation with 119
Hello Lorenzo,
It's really nice to hear from you. I hope you're doing well.
So I am using QE 6.4, and unfortunately I can see that the scf calculation
takes the same amount of time as the band calculation. So I doubt that the
problem is an inconsistency between these two calculations. But
Hello Zahra,
if I understand correctly, you manage to do the scf calculation, but
then the band calculation is very slow. The cost per k-point of nscf
should be more or less the same as the cost per k-point of one scf
iteration. If it is not, there is something wrong. One possible problem,
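That rule of thumb can be turned into a quick sanity check: estimate the
expected bands/nscf time from the scf output and compare it with the
measured one. A sketch with purely illustrative numbers (none are from
this thread):

```python
# Rule of thumb from the discussion: the cost per k-point of an nscf
# (bands) run should be roughly the cost per k-point of ONE scf
# iteration. All numbers below are illustrative.
def expected_nscf_time(scf_total_time, n_scf_iterations, n_k_scf, n_k_nscf):
    time_per_iteration = scf_total_time / n_scf_iterations
    time_per_kpoint = time_per_iteration / n_k_scf
    return time_per_kpoint * n_k_nscf

# e.g. scf: 600 s over 12 iterations with 10 k-points,
# bands run along a path with 100 k-points:
print(expected_nscf_time(600.0, 12, 10, 100))  # 500.0 seconds
```

If the measured bands time is far above this estimate, something is
wrong with the nscf setup rather than with the system size itself.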
Dear all,
First of all, I hope everyone is safe and well in these crazy times.
I'm calculating the electronic band dispersion of a 2D heterostructure with
a 59-atom unit cell. This system is a small-bandgap (10-20 meV)
semiconductor. The number of valence bands is 181 (valence electrons/2).
When