[QE-users] qe 6.5 seemingly cannot use libxc 5.0.0

2020-05-27 Thread José C. Conesa

Hi,

I tried using libxc v. 5.0.0 with qe-6.5 and was unable to make it work.
It seems that libxc 5.0.0 no longer provides the library libxcf03, while
funct.f90 requires the associated module xc_f03_lib_m.mod. This should
be fixed in a new version of qe.
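In case it helps others, a possible workaround (a sketch only; the prefix path is illustrative, and behaviour differs between libxc releases) is to make sure libxc's Fortran interface is actually built before pointing QE's configure at it:

```shell
# Sketch: build libxc with its Fortran bindings enabled.
# In some libxc 5.x builds the Fortran parts (libxcf03 and the
# xc_f03_lib_m module needed by QE's funct.f90) are only produced
# when a Fortran compiler is detected at configure time.
./configure --prefix="$HOME/opt/libxc" CC=gcc FC=gfortran
make -j4 && make install

# Check that the module and library QE links against are really there:
ls "$HOME/opt/libxc"/include/xc_f03_lib_m.mod "$HOME/opt/libxc"/lib/libxcf03.*
```

If the two files do not appear after the install step, the Fortran interface was not built and QE's configure cannot succeed regardless of version.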


Best regards,

--
José C. Conesa
Instituto de Catálisis y Petroleoquímica, CSIC
Marie Curie 2, Cantoblanco
28049 Madrid, Spain
Tel. (+34)915854766

___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] efficient parallelization on a system without Infiniband

2020-05-27 Thread Michal Krompiec
Dear Ye, Dear Paolo,
I re-ran the benchmarks for my test case: a single MD step of a smallish
supercell of a certain oxide semiconductor, with PBE and PAW (from PSlib).
Previous timings were from the start of the MD run until the end of the 1st SCF
iteration of the 2nd MD step.

Interestingly, ELPA gave no advantage over ScaLAPACK, and
diago_david_ndim=2 made things significantly slower.
The ScaLAPACK build is QE 6.5, the ELPA build is the development version
from last month. Both compiled with Intel 2020 and Intel MPI.

Here are the numbers:

MPI per node  npool  nodes  solver     diago_david_ndim  time / s  speedup vs 1 node
56            4      1      ELPA       4                 1335
56            4      1      ELPA       2                 1931
56            4      1      ScaLAPACK  4                  976
56            4      1      ScaLAPACK  2                 1486
56            4      4      ELPA       4                  367      3.637602
56            4      4      ELPA       2                  729      2.648834
56            4      4      ScaLAPACK  4                  357      2.733894
56            4      4      ScaLAPACK  2                  555      2.677477
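As a sanity check, the speedup column can be recomputed from the raw timings; a quick sketch for the ScaLAPACK, diago_david_ndim=4 rows (976 s on 1 node, 357 s on 4 nodes):

```shell
# Recompute "speedup vs 1 node" and the implied parallel efficiency.
t1=976; t4=357
awk -v t1="$t1" -v t4="$t4" 'BEGIN {
  speedup = t1 / t4            # ~2.73x, matching the table
  eff = 100 * speedup / 4      # efficiency on 4 nodes, in percent
  printf "speedup %.2fx, efficiency %.1f%%\n", speedup, eff
}'
```

An efficiency near 68% on gigabit-class interconnect is consistent with the communication-bound picture discussed in this thread.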
Best,
Michal


On Wed, 27 May 2020 at 15:47, Ye Luo  wrote:

> 3.26x seems possible to me. It can be caused by load imbalance in the
> iterative solver among the 4 k-points.
> Could you list the time in seconds with 1 node and 4 nodes? Those you used
> to calculate 3.26x.
> Could you also try diago_david_ndim=2 under "" and provide 1 and
> 4-node time in seconds?
>
> In addition, you may try ELPA which usually gives better performance than
> scalapack.
>
> Thanks,
> Ye
> ===
> Ye Luo, Ph.D.
> Computational Science Division & Leadership Computing Facility
> Argonne National Laboratory
>
>
> On Wed, May 27, 2020 at 9:27 AM Michal Krompiec 
> wrote:
>
>> Hello,
>> How can I minimize inter-node MPI communication in a pw.x run? My
>> system doesn't have Infiniband and inter-node MPI can easily become
>> the bottleneck.
>> Let's say, I'm running a calculation with 4 k-points, on 4 nodes, with
>> 56 MPI tasks per node. I would then use -npool 4 to create 4 pools for
>> the k-point parallelization. However, it seems that the
>> diagonalization is by default parallelized imperfectly (or isn't it?):
>>  Subspace diagonalization in iterative solution of the eigenvalue
>> problem:
>>  one sub-group per band group will be used
>>  scalapack distributed-memory algorithm (size of sub-group:  7*  7
>> procs)
>> So far, speedup on 4 nodes vs 1 node is 3.26x. Is it normal or does it
>> look like it can be improved?
>>
>> Best regards,
>>
>> Michal Krompiec
>> Merck KGaA
>> Southampton, UK
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] [QE-user] QE on AIIDA labs

2020-05-27 Thread Yuvam Bhateja
Thanks a lot for the help sir

On Wed, 27 May 2020, 11:13 pm Ercole Loris,  wrote:

> Hello Yuvam,
>
>
> I suggest you have a look at the documentation of
> the aiida-quantumespresso plugin:
>
> https://aiida-quantumespresso.readthedocs.io/en/latest/
>
> it will give you
> some examples of QE scripts run with AiiDA.
>
>
> Also you can have a look at some tutorials here:
>
> https://aiida-tutorials.readthedocs.io/en/latest/
>
>
> If you have more specific questions, you can write to the AiiDA-users
> mailing list:
>
> http://www.aiida.net/mailing-list/
>
> aiidaus...@googlegroups.com
>
>
> Best,
>
> -Loris
>
>
>
> ---
> Loris Ercole
> Postdoctoral Researcher
> Theory and Simulation of Materials – École Polytechnique Fédérale de
> Lausanne
> | Address:  EPFL STI IMX THEOS, Station 9 – CH-1015 Lausanne (Switzerland)
> | Office:  ME D2 1022 | Phone:  +41 21 693 1099 |
> https://people.epfl.ch/loris.ercole
>
> --
> *From:* users  on behalf of
> Yuvam Bhateja 
> *Sent:* Wednesday, May 27, 2020 5:52:01 PM
> *To:* users@lists.quantum-espresso.org
> *Subject:* [QE-users] [QE-user] QE on AIIDA labs
>
> Hey,
> I was trying to run QE on AIIDA using my QE input file .in but the process
> doesn't carry on.
> What kind of input does QE on AiiDA take?
> And while using AIIDA labs (on cloud) what is the limit of CPU cores that
> I can use to run the calculation?
>
> Regards
> Yuvam Bhateja
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] [QE-user] QE on AIIDA labs

2020-05-27 Thread Ercole Loris
Hello Yuvam,


I suggest you have a look at the documentation of the aiida-quantumespresso 
plugin:

https://aiida-quantumespresso.readthedocs.io/en/latest/

it will give you some 
examples of QE scripts run with AiiDA.


Also you can have a look at some tutorials here:

https://aiida-tutorials.readthedocs.io/en/latest/


If you have more specific questions, you can write to the AiiDA-users mailing 
list:

http://www.aiida.net/mailing-list/

aiidaus...@googlegroups.com


Best,

-Loris



---
Loris Ercole
Postdoctoral Researcher
Theory and Simulation of Materials – École Polytechnique Fédérale de Lausanne
| Address:  EPFL STI IMX THEOS, Station 9 – CH-1015 Lausanne (Switzerland)
| Office:  ME D2 1022 | Phone:  +41 21 693 1099 | 
https://people.epfl.ch/loris.ercole



From: users  on behalf of Yuvam 
Bhateja 
Sent: Wednesday, May 27, 2020 5:52:01 PM
To: users@lists.quantum-espresso.org
Subject: [QE-users] [QE-user] QE on AIIDA labs

Hey,
I was trying to run QE on AIIDA using my QE input file .in but the process 
doesn't carry on.
What kind of input does QE on AiiDA take?
And while using AIIDA labs (on cloud) what is the limit of CPU cores that I can 
use to run the calculation?

Regards
Yuvam Bhateja
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

[QE-users] Newly-engineered Materials Cloud Archive unveiled

2020-05-27 Thread Nicola Marzari



Thank you Giovanni! Let me try to repost below with hopefully friendlier 
formatting - apologies to all for the double posting!


nicola



Dear Quantum ESPRESSO users,

we would like to announce the launch of a newly engineered Materials 
Cloud Archive, now powered by the 
same Invenio framework as the massive Zenodo 
repository at CERN.


The Materials Cloud Archive, 
active since March 2017, is a public, free, open-access repository for 
research data and tools in computational materials science and in 
related experimental efforts, inspired by the archive initiatives for 
preprints. It provides the capability to upload and persist arbitrary 
data records from anyone in the community with a minimum guaranteed 
10-year retention time per record. Currently, 0.5 petabytes are already 
allocated; the limits for standard submissions are 5 GB for data 
sets in any format and 50 GB for AiiDA databases; moderators can 
approve larger data sets upon request. Each entry is assigned a globally 
unique and persistent digital object identifier (DOI) and harvestable 
metadata. The new Invenio platform makes it easier for authors to submit 
and later update data records, provides full-text searches, and powers 
streamlined workflows for content moderation.


The Archive is an integral part of the Materials Cloud 
FAIR data infrastructure, in 
partnership with several European and national centres - these include 
the MaX Centre of Excellence, the MARVEL NCCR, the H2020 MarketPlace, 
NFFA, and Intersect projects, EMMC, swissuniversities, PASC, and 
OSSCAR. It is a recommended repository for Nature’s Scientific Data, 
it is indexed by FAIRsharing, Google Dataset Search, and 
EOSC-hub/EUDAT’s service B2FIND, and it is registered on re3data. 
Finally, it is an official implementation network of the GO FAIR 
initiative.


More information on the Materials Cloud integration of data, workflows 
and codes can be found in L. Talirz et al., Materials Cloud, a platform 
for open computational science, arXiv:2003.12510 (2020) 
and in S. Huber et al., AiiDA 1.0, a 
scalable computational infrastructure for automated reproducible 
workflows and data provenance, arXiv:2003.12476 (2020).


The new Materials Cloud Archive infrastructure has been unveiled today 
(Wednesday 27th May 2020), during the MaX webinar (part 
of the ongoing MaX webinar series on 
advances toward exascale computing) that focused on FAIR and 
reproducible high throughput computational science as enabled by AiiDA 
and AiiDA lab, Quantum ESPRESSO and SIRIUS, and the Materials Cloud 
Archive. Videos of the presentations will be available online from 
tomorrow (28th May) on the webpage of the event.


With warmest regards,

Giovanni Pizzi, Nicola Marzari, and the Materials Cloud team


--
Prof Nicola Marzari, Chair of Theory and Simulation of Materials, EPFL
Director, National Centre for Competence in Research NCCR MARVEL, EPFL
http://theossrv1.epfl.ch/Main/Contact http://nccr-marvel.ch/en/project
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] Dipole moment of the CO2 molecule

2020-05-27 Thread ENDALE ABEBE
Dear Dr. Giovanni Cantele

Thank you very much for this helpful information.

After modifying the input, I found that the choice of *eopreg* also affects
the result.
I found the dipole value close to zero using the attached input, but I
still don't know how to choose the value of *eopreg*. Would you mind
giving me some more explanation of eopreg?
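For context, eopreg is the fraction of the cell (in crystal units, along the edir direction) over which the saw-tooth potential ramps back down; like emaxpos, it should fall entirely in the vacuum region, where the charge density vanishes. An illustrative fragment of the relevant &SYSTEM entries (a sketch with example values, not taken from the attached file):

```
&SYSTEM
   ! ... usual structure and cutoff entries ...
   tefield  = .true.   ! add the saw-tooth potential
   dipfield = .true.   ! apply the dipole correction
   edir     = 3        ! field along z
   emaxpos  = 0.0      ! position of the discontinuity (crystal units), in vacuum
   eopreg   = 0.1      ! width of the ramp-down region, also entirely in vacuum
/
```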

With Regards


On Tue, May 26, 2020 at 6:15 AM Giovanni Cantele <
giovanni.cant...@spin.cnr.it> wrote:

> An issue with your input is that you place the point where the external
> saw-tooth potential
> has a discontinuity in its derivative at emaxpos = 0.5 (it is in crystal
> units, so because edir = 3,
> it means that the potential depends on z and the discontinuity is at
> (0,0,a/2)).
> However, the CO2 molecule is placed along the x axis at fixed y = a/2 and
> z = a/2. As such,
> the discontinuity, which is unphysical since it serves to compensate the
> spurious dipole field that
> would arise as a consequence of the periodic boundary conditions, is
> located in a region where the
> charge density is not zero. emaxpos should be set in such a way that the
> discontinuity is located
> in the vacuum, in the region where the charge density is zero or in any
> case very small. In your setup
> it seems that a good choice would be emaxpos=0.
>
> Giovanni
>
> PS since you’re studying an isolated molecule, provided the size of your
> supercell is sufficiently large,
> the eigenvalues should exhibit no dependence on k. As a consequence, in
> this case, a 4x4x4 sampling
> of the Brillouin zone, should provide results equivalent to a gamma only
> sampling. So, you can switch to
> K_POINTS automatic
> 1 1 1 0 0 0
> or, even better,
> K_POINTS gamma
> to make your calculation faster while keeping the same accuracy.
>
> On 27 May 2020, at 03:42, ENDALE ABEBE  wrote:
>
> Dear Experts, users and all
>
> I found the dipoles of CO2 calculated by Quantum ESPRESSO as :
>
> Computed dipole along edir(3) :
> Elec. dipole 0.3112 Ry au, 0.7909 Debye
> Ion. dipole 0.8137 Ry au, 2.0683 Debye
> Dipole 41.9812 Ry au, 106.7055 Debye
> Dipole field 0.5025 Ry au,
>
> I assumed the third value is the sum of the electronic and ionic
> contributions.
> The input and output files are attached herewith.
> Since CO2 is a non-polar molecule (with polar bonds), shouldn't the total
> dipole moment be zero?
>
> --
> Endale Abebe
> Program coordinator and Lecturer
> Faculty of Materials Science and Engineering
> Jimma Institute of Technology
> Jimma University
> P.O.Box 378, Jimma, Ethiopia
> Mobile: +251921381598
>
>
> --
>
> Giovanni Cantele, PhD
>
> CNR-SPIN
> c/o Dipartimento di Fisica
> Universita' di Napoli "Federico II"
> Complesso Universitario M. S. Angelo - Ed. 6
> Via Cintia, I-80126, Napoli, Italy
>
> e-mail: giovanni.cant...@spin.cnr.it 
> gcant...@gmail.com
> Phone: +39 081 676910
> Skype contact: giocan74
> Web page: https://sites.google.com/view/giovanni-cantele
>



-- 
Endale Abebe
Program coordinator and Lecturer
Faculty of Materials Science and Engineering
Jimma Institute of Technology
Jimma University
P.O.Box 378, Jimma, Ethiopia
Mobile: +251921381598


CO2_scf.in
Description: Binary data
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] efficient parallelization on a system without Infiniband

2020-05-27 Thread Paolo Giannozzi
(sorry for the previous empty email)

On Wed, May 27, 2020 at 4:27 PM Michal Krompiec 
wrote:


> So far, speedup on 4 nodes vs 1 node is 3.26x. Is it normal or does it
> look like it can be improved?
>

it looks like there isn't much room for improvement. One can figure out
how to improve things (or what hinders improvement) by looking at the final
reports with timing, but you have to know quite a bit about QE
parallelization.

Paolo

-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] efficient parallelization on a system without Infiniband

2020-05-27 Thread Paolo Giannozzi
On Wed, May 27, 2020 at 4:27 PM Michal Krompiec 
wrote:

> Hello,
> How can I minimize inter-node MPI communication in a pw.x run? My
> system doesn't have Infiniband and inter-node MPI can easily become
> the bottleneck.
> Let's say, I'm running a calculation with 4 k-points, on 4 nodes, with
> 56 MPI tasks per node. I would then use -npool 4 to create 4 pools for
> the k-point parallelization. However, it seems that the
> diagonalization is by default parallelized imperfectly (or isn't it?):
>  Subspace diagonalization in iterative solution of the eigenvalue
> problem:
>  one sub-group per band group will be used
>  scalapack distributed-memory algorithm (size of sub-group:  7*  7
> procs)
> So far, speedup on 4 nodes vs 1 node is 3.26x. Is it normal or does it
> look like it can be improved?
>
> Best regards,
>
> Michal Krompiec
> Merck KGaA
> Southampton, UK


-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

[QE-users] [QE-user] QE on AIIDA labs

2020-05-27 Thread Yuvam Bhateja
Hey,
I was trying to run QE on AIIDA using my QE input file .in but the process
doesn't carry on.
What kind of input does QE on AiiDA take?
And while using AIIDA labs (on cloud) what is the limit of CPU cores that I
can use to run the calculation?

Regards
Yuvam Bhateja
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

[QE-users] Newly-engineered Materials Cloud Archive unveiled

2020-05-27 Thread Giovanni Pizzi
Dear Quantum ESPRESSO users,
we would like to announce the launch of a newly engineered Materials Cloud 
Archive, now powered by the same Invenio 
framework as the massive Zenodo repository at CERN.

The Materials Cloud Archive, active since 
March 2017, is a public, free, open-access repository for research data and 
tools in computational materials science and in related experimental efforts, 
inspired by the archive initiatives for preprints. It provides the capability 
to upload and persist arbitrary data records from anyone in the community with 
a minimum guaranteed 10-year retention time per record. Currently, 0.5 
petabytes are already allocated; the limits for standard submissions are 5 
GB for data sets in any format and 50 GB for AiiDA databases; moderators 
can approve larger data sets upon request. Each entry is assigned a globally 
unique and persistent digital object identifier (DOI) and harvestable metadata. 
The new Invenio platform makes it easier for authors to submit and later update 
data records, provides full-text searches, and powers streamlined workflows for 
content moderation.

The Archive is an integral part of the Materials 
Cloud FAIR data infrastructure, in partnership 
with several European and national centres - these include the 
MaX Centre of Excellence, the MARVEL NCCR, the H2020 MarketPlace, 
NFFA, and Intersect projects, EMMC, swissuniversities, PASC, and 
OSSCAR. It is a recommended repository for Nature’s Scientific Data, 
it is indexed by FAIRsharing, Google Dataset Search, and 
EOSC-hub/EUDAT’s service B2FIND, and it is registered on re3data. 
Finally, it is an official implementation network of the GO FAIR 
initiative.

More information on the Materials Cloud integration of data, workflows and 
codes can be found in L. Talirz et al., Materials Cloud, a platform for open 
computational science, arXiv:2003.12510 
(2020) and in S. Huber et al., AiiDA 1.0, a 
scalable computational infrastructure for automated reproducible workflows and 
data provenance, arXiv:2003.12476 (2020).

The new Materials Cloud Archive infrastructure has been unveiled today 
(Wednesday 27th May 2020), during the MaX webinar (part of the ongoing 
MaX webinar series on advances toward exascale computing) that focused 
on FAIR and reproducible high-throughput computational science as 
enabled by AiiDA and AiiDA lab, Quantum ESPRESSO and SIRIUS, and the 
Materials Cloud Archive. Videos of the presentations will be available 
online from tomorrow (28th May) on the webpage of the event.

With warmest regards,

Giovanni Pizzi, Nicola Marzari, and the Materials Cloud team

___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] efficient parallelization on a system without Infiniband

2020-05-27 Thread Ye Luo
3.26x seems possible to me. It can be caused by load imbalance in the
iterative solver among the 4 k-points.
Could you list the time in seconds with 1 node and 4 nodes? Those you used
to calculate 3.26x.
Could you also try diago_david_ndim=2 under "" and provide 1 and
4-node time in seconds?

In addition, you may try ELPA which usually gives better performance than
scalapack.

Thanks,
Ye
===
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory


On Wed, May 27, 2020 at 9:27 AM Michal Krompiec 
wrote:

> Hello,
> How can I minimize inter-node MPI communication in a pw.x run? My
> system doesn't have Infiniband and inter-node MPI can easily become
> the bottleneck.
> Let's say, I'm running a calculation with 4 k-points, on 4 nodes, with
> 56 MPI tasks per node. I would then use -npool 4 to create 4 pools for
> the k-point parallelization. However, it seems that the
> diagonalization is by default parallelized imperfectly (or isn't it?):
>  Subspace diagonalization in iterative solution of the eigenvalue
> problem:
>  one sub-group per band group will be used
>  scalapack distributed-memory algorithm (size of sub-group:  7*  7
> procs)
> So far, speedup on 4 nodes vs 1 node is 3.26x. Is it normal or does it
> look like it can be improved?
>
> Best regards,
>
> Michal Krompiec
> Merck KGaA
> Southampton, UK
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

[QE-users] efficient parallelization on a system without Infiniband

2020-05-27 Thread Michal Krompiec
Hello,
How can I minimize inter-node MPI communication in a pw.x run? My
system doesn't have Infiniband and inter-node MPI can easily become
the bottleneck.
Let's say, I'm running a calculation with 4 k-points, on 4 nodes, with
56 MPI tasks per node. I would then use -npool 4 to create 4 pools for
the k-point parallelization. However, it seems that the
diagonalization is by default parallelized imperfectly (or isn't it?):
 Subspace diagonalization in iterative solution of the eigenvalue problem:
 one sub-group per band group will be used
 scalapack distributed-memory algorithm (size of sub-group:  7*  7 procs)
So far, speedup on 4 nodes vs 1 node is 3.26x. Is it normal or does it
look like it can be improved?
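For reference, the kind of launch line this setup corresponds to (a sketch; the input/output file names are placeholders and scheduler integration is omitted):

```shell
# 4 nodes x 56 MPI tasks = 224 ranks, one pool per k-point, so that each
# k-point's FFTs and subspace diagonalization stay within a single node.
# -ndiag 49 (= 7*7) matches the ScaLAPACK sub-group size pw.x reported;
# it is optional, since pw.x picks a square grid by default.
mpirun -np 224 pw.x -npool 4 -ndiag 49 -inp scf.in > scf.out
```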

Best regards,

Michal Krompiec
Merck KGaA
Southampton, UK
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users


Re: [QE-users] Huge memory requirements for a scf calculation of a system consisting with 300+ atoms.

2020-05-27 Thread Nicola Marzari



Fully agree with this! If I can add, using the Baldereschi point (or, to 
make it simple, 1/4 1/4 1/4 in crystal coordinates) and nosym=.true. 
allows you to do a calculation with one k-point that is almost as 
accurate as using a 2x2x2 shifted Monkhorst-Pack mesh (i.e. 2 2 2 1 1 
1), at a cost that is only twice as large as a Gamma-only calculation.


If a metal, use 0.03 Ry of smearing (either m-p or m-v), and indeed you 
are good to go for a fast but quite accurate relaxation.
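In input-file terms, the Baldereschi-point suggestion corresponds to a card along these lines (a sketch; the weight of a single k-point is arbitrary), together with nosym=.true. in &SYSTEM:

```
K_POINTS crystal
1
  0.25 0.25 0.25 1.0
```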


nicola

On 27/05/2020 13:39, Giuseppe Mattioli wrote:


Dear all
Just to add a bit of personal experience that might be useful to others. 
Let's admit that many k-points are necessary to provide a good 
description of the electronic properties of a given system, this is 
generally true in the case of metal systems. This fact might not extend 
to the potential, and very tiny differences might be found in final 
structures optimized by using a coarser sampling of the Brillouin zone. 
In the huge cell shared by Hongyi Zhao, I would start a geometry 
optimization from a gamma-only simulation and then I would check if 
forces on ions were low enough with a 2x2x1 mesh. If this was not the 
case, I would fully optimize the system with the new mesh and go a step 
ahead, and so on up to a decent convergence of the potential. Then I 
would perform nscf calculations with increasing numbers of k points 
followed by, e.g., dos.x post processing runs, to check the convergence 
of the density of states (be careful, because AFAIK nscf runs overwrite 
the results of the scf run). Of course, all of this depends on the 
specific purpose of the calculation, but in my past experience with 
molecules on metal surfaces this strategy saves a lot of time and 
resources.

HTH
Giuseppe

Quoting Sebastian Hütter :


Hi,

This may be a stupid question, but...


 Estimated static dynamical RAM per process >   3.32 GB
 Estimated max dynamical RAM per process >  10.52 GB
 Estimated total dynamical RAM > 462.96 GB


... is this not expected behavior? I'm not super experienced, so I 
just assumed it was.


Your numbers pretty much match what I see in terms of "RAM per Cell 
volume" in metals with non-symmetric unit cells using PAW pseudos, if 
not less. Random example: 126 atoms, 63 k-points, ~1000 bands, 250³ 
dense grid FFT gives ~10GB per rank, for a total of 680GB with 64 
ranks. I actually plan node requests for our cluster based entirely on 
memory required, probably wasting CPU time along the way (4N*16C in 
the example above).


Reasonable ke and charge cutoffs seem to blow up the memory 
requirements a lot. Of course multiplied by the number of bands...



Best,

Sebastian


--
M.Sc. Sebastian Hütter
Otto-von-Guericke University Magdeburg
Institute of Materials and Joining Technology




GIUSEPPE MATTIOLI
CNR - ISTITUTO DI STRUTTURA DELLA MATERIA
Via Salaria Km 29,300 - C.P. 10
I-00015 - Monterotondo Scalo (RM)
Mob (*preferred*) +39 373 7305625
Tel + 39 06 90672342 - Fax +39 06 90672316
E-mail: 




--
--
Prof Nicola Marzari, Chair of Theory and Simulation of Materials, EPFL
Director, National Centre for Competence in Research NCCR MARVEL, EPFL
http://theossrv1.epfl.ch/Main/Contact http://nccr-marvel.ch/en/project
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] [SUSPECT ATTACHMENT REMOVED] QE-GPU version 6.5: cegterg cannot allocate vc_d

2020-05-27 Thread Pietro Bonfa

Dear Simone,

which version of the code are you using? A memory leak in forces has 
been fixed in v6.5a2.


Let me also mention that, as you probably know, the RAM estimate also 
approximates reasonably well the amount of global memory required on the 
GPU, which, however, is generally smaller than the host RAM.

Finally, keep in mind that the code reports a lower-bound estimate.
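One way to check whether device memory (rather than host RAM) is the limit, assuming NVIDIA GPUs with a standard driver install, is to log the card's memory while the job runs:

```shell
# Log GPU memory usage every 5 seconds while pw.x is running.
# If memory.used approaches memory.total just before the crash, the
# "cannot allocate vc_d" failure is device-memory exhaustion, even
# though plenty of host RAM remains free.
nvidia-smi --query-gpu=timestamp,memory.used,memory.total --format=csv -l 5
```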

Hope this helps,
best,
Pietro

On 5/27/20 2:01 PM, Simone Del Puppo wrote:

Hi everybody,
I have a problem with version 6.5 of quantum espresso with gpu.
I run the relax simulation ( input file attached) but after some bfgs 
steps it stops with the following error message:

 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     task #         1
     from cegterg : error #         1
     cannot allocate vc_d
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
It seems to be a memory problem, but the RAM estimate in the output file:
      Estimated max dynamical RAM per process >       8.60 GB
      Estimated total dynamical RAM >      34.38 GB

is much less than the memory available!
Can someone kindly help me, please? Input file is attached.
Thank you in advance.

Best,
Simone


Simone Del Puppo
PhD
Department of physics,
University of Trieste




___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

[QE-users] [SUSPECT ATTACHMENT REMOVED] QE-GPU version 6.5: cegterg cannot allocate vc_d

2020-05-27 Thread Simone Del Puppo
Hi everybody,
I have a problem with version 6.5 of quantum espresso with gpu.
I run the relax simulation ( input file attached) but after some bfgs steps
it stops with the following error message:
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     task #         1
     from cegterg : error #         1
     cannot allocate vc_d
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
It seems to be a memory problem, but the RAM estimate in the output file:
 Estimated max dynamical RAM per process >   8.60 GB
 Estimated total dynamical RAM >  34.38 GB

is much less than the memory available!
Can someone kindly help me, please? Input file is attached.
Thank you in advance.

Best,
Simone


Simone Del Puppo
PhD
Department of physics,
University of Trieste
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users

Re: [QE-users] Huge memory requirements for a scf calculation of a system consisting with 300+ atoms.

2020-05-27 Thread Giuseppe Mattioli


Dear all
Just to add a bit of personal experience that might be useful to  
others. Let's admit that many k-points are necessary to provide a good  
description of the electronic properties of a given system, this is  
generally true in the case of metal systems. This fact might not  
extend to the potential, and very tiny differences might be found in  
final structures optimized by using a coarser sampling of the  
Brillouin zone. In the huge cell shared by Hongyi Zhao, I would start  
a geometry optimization from a gamma-only simulation and then I would  
check if forces on ions were low enough with a 2x2x1 mesh. If this was  
not the case, I would fully optimize the system with the new mesh and  
go a step ahead, and so on up to a decent convergence of the  
potential. Then I would perform nscf calculations with increasing  
numbers of k points followed by, e.g., dos.x post processing runs, to  
check the convergence of the density of states (be careful, because  
AFAIK nscf runs overwrite the results of the scf run). Of course, all  
of this depends on the specific purpose of the calculation, but in my  
past experience with molecules on metal surfaces this strategy saves a  
lot of time and resources.

HTH
Giuseppe

Quoting Sebastian Hütter :


Hi,

This may be a stupid question, but...


 Estimated static dynamical RAM per process >   3.32 GB
 Estimated max dynamical RAM per process >  10.52 GB
 Estimated total dynamical RAM > 462.96 GB


... is this not expected behavior? I'm not super experienced, so I  
just assumed it was.


Your numbers pretty much match what I see in terms of "RAM per Cell  
volume" in metals with non-symmetric unit cells using PAW pseudos,  
if not less. Random example: 126 atoms, 63 k-points, ~1000 bands,  
250³ dense grid FFT gives ~10GB per rank, for a total of 680GB with  
64 ranks. I actually plan node requests for our cluster based  
entirely on memory required, probably wasting CPU time along the way  
(4N*16C in the example above).


Reasonable ke and charge cutoffs seem to blow up the memory  
requirements a lot. Of course multiplied by the number of bands...



Best,

Sebastian


--
M.Sc. Sebastian Hütter
Otto-von-Guericke University Magdeburg
Institute of Materials and Joining Technology
___
Quantum ESPRESSO is supported by MaX (www.max-centre.eu/quantum-espresso)
users mailing list users@lists.quantum-espresso.org
https://lists.quantum-espresso.org/mailman/listinfo/users




GIUSEPPE MATTIOLI
CNR - ISTITUTO DI STRUTTURA DELLA MATERIA
Via Salaria Km 29,300 - C.P. 10
I-00015 - Monterotondo Scalo (RM)
Mob (*preferred*) +39 373 7305625
Tel + 39 06 90672342 - Fax +39 06 90672316
E-mail: 


Re: [QE-users] Still not a group! symmetry disabled

2020-05-27 Thread mkondrin

On 27.05.2020 14:29, Paolo Giannozzi wrote:
On Wed, May 27, 2020 at 11:19 AM mkondrin wrote:


  Program PWSCF v.6.4 starts on 27May2020 at 15:10: 5
[...]
  Error in routine  good_fft_order (2050):
   fft order too large


I think this is a bug fixed in subsequent versions. Already v.6.4.1 
does not yield this message


Paolo

Thank you. I will upgrade my QE installation.

Sincerely yours,
M. Kondrin





Re: [QE-users] Huge memory requirements for a scf calculation of a system consisting with 300+ atoms.

2020-05-27 Thread Sebastian Hütter

Hi,

This may be a stupid question, but...


  Estimated static dynamical RAM per process >   3.32 GB
  Estimated max dynamical RAM per process >  10.52 GB
  Estimated total dynamical RAM > 462.96 GB


... is this not expected behavior? I'm not super experienced, so I just assumed 
it was.

Your numbers pretty much match what I see in terms of "RAM per Cell volume" in metals with non-symmetric unit cells 
using PAW pseudos, if not less. Random example: 126 atoms, 63 k-points, ~1000 bands, 250³ dense grid FFT gives ~10GB per 
rank, for a total of 680GB with 64 ranks. I actually plan node requests for our cluster based entirely on memory 
required, probably wasting CPU time along the way (4N*16C in the example above).


Reasonable kinetic-energy and charge-density cutoffs seem to blow up the memory
requirements a lot. And all of that is, of course, multiplied by the number of
bands...
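For planning node requests like this, a back-of-the-envelope estimate of just the wavefunction storage already gives the right order of magnitude. A rough sketch, assuming Rydberg atomic units and deliberately ignoring FFT grids, PAW augmentation and workspace (the numbers below are illustrative, not Sebastian's actual cell):

```python
from math import pi

def wfc_memory_gb(volume_bohr3, ecut_ry, nbnd, nks):
    # Plane waves inside the cutoff sphere: N = V * kmax^3 / (6*pi^2),
    # with kmax = sqrt(ecut) in Rydberg atomic units (bohr^-1).
    npw = volume_bohr3 * ecut_ry ** 1.5 / (6 * pi ** 2)
    # 16 bytes per complex-double coefficient, per band, per k-point.
    return nbnd * nks * npw * 16 / 1024 ** 3

# Illustrative only: ~20000 bohr^3 cell, 60 Ry, 1000 bands, 63 k-points
est = wfc_memory_gb(20000, 60, 1000, 63)
print(f"{est:.0f} GB")  # wavefunctions alone, before distribution over pools/ranks
```

The estimate scales linearly in bands and k-points and as ecut^(3/2), which is why "reasonable" cutoffs on large, low-symmetry cells dominate the memory budget; k-point pools then divide the per-node share.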



Best,

Sebastian


--
M.Sc. Sebastian Hütter
Otto-von-Guericke University Magdeburg
Institute of Materials and Joining Technology

Re: [QE-users] Still not a group! symmetry disabled

2020-05-27 Thread Paolo Giannozzi
On Wed, May 27, 2020 at 11:19 AM mkondrin  wrote:

  Program PWSCF v.6.4 starts on 27May2020 at 15:10: 5
> [...]
>   Error in routine  good_fft_order (2050):
>fft order too large
>

I think this is a bug fixed in subsequent versions. Already v.6.4.1 does
not yield this message

Paolo



-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222

Re: [QE-users] Still not a group! symmetry disabled

2020-05-27 Thread mkondrin

On 27.05.2020 14:05, Lorenzo Paulatto wrote:


Hello Anonymous,
this is really tiny:

   A =   2.5200


kind regards



Hello Lorenzo,

This does not influence the result. If I increase the density of the
k-mesh (to, say, 4 4 2 0 0 0), the error persists.


Sincerely yours,
M.V. Kondrin


Re: [QE-users] Still not a group! symmetry disabled

2020-05-27 Thread Lorenzo Paulatto


Hello Anonymous,
this is really tiny:

   A =   2.5200


kind regards


--
Lorenzo Paulatto - Paris

[QE-users] Still not a group! symmetry disabled

2020-05-27 Thread mkondrin

Dear QE developers and users!

I have encountered a strange error. It does not produce a CRASH file in the
working directory, but the job stops. At the end of the output file there are
these error messages:


 Program PWSCF v.6.4 starts on 27May2020 at 15:10: 5

 This program is part of the open-source Quantum ESPRESSO suite
 for quantum simulation of materials; please cite
 "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
 "P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
  URL http://www.quantum-espresso.org",
 in publications or presentations arising from this work. More 
details at

 http://www.quantum-espresso.org/quote

 Parallel version (MPI), running on 8 processors
 R & G space division:  proc/nbgrp/npool/nimage =   8
 Waiting for input...
 Reading input from standard input

 Current dimensions of program PWSCF are:
 Max number of different atomic species (ntypx) = 10
 Max number of k-points (npk) =  4
 Max angular momentum in pseudopotentials (lmaxx) =  3
   file C.pbe-mt_fhi.UPF: wavefunction(s)  4f renormalized

 Subspace diagonalization in iterative solution of the eigenvalue problem:

 one sub-group per band group will be used
 scalapack distributed-memory algorithm (size of sub-group:  2* 2 procs)


 Message from routine find_sym:
 Not a group! Trying with lower acceptance parameter...
 Message from routine find_sym:
 Still not a group! symmetry disabled

 %%
 Error in routine  good_fft_order (2050):
  fft order too large
 %%

 stopping ...

The input file is attached below. However, if I change the coordinate
of the last atom to:


C   0.500   0.875   0.777857142857143

the job completes OK.

Sincerely yours,
M. V. Kondrin

&CONTROL
title='An (100) twin',
calculation='vc-relax',
prefix='twin',
tstress=.true.,
tprnfor=.true.,
disk_io='low',
pseudo_dir = '../../../QE/pseudo',
outdir='./tmp'
/
&SYSTEM
  ibrav = 0
  A =   2.5200
  nat = 32
  ntyp = 1
  tot_charge=0,
  ecutwfc=70,
  occupations='smearing',
  smearing='methfessel-paxton',
  degauss=0.02
/
&ELECTRONS
mixing_beta = 0.7,
conv_thr =  1.0d-5
/
&IONS
/
&CELL
cell_factor=4,
press=0.0
/
ATOMIC_SPECIES
   C   12.0106   C.pbe-mt_fhi.UPF

K_POINTS {automatic}
2 2 1 0 0 0

CELL_PARAMETERS {alat}
  1.000   0.000   0.000
  0.000   2.000   0.000
  0.000   0.000   5.950

ATOMIC_POSITIONS {crystal}
C   0.500   0.875   0.208
C   0.000   0.125   0.089285714285714
C   0.500   0.025   0.029761904761905
C   0.000   0.875   0.148809523809524
C   0.500   0.775   0.446428571428571
C   0.000   0.125   0.327380952380952
C   0.500   0.125   0.267857142857143
C   0.000   0.875   0.386904761904762
C   0.000   0.625   0.089285714285714
C   0.500   0.375   0.208
C   0.000   0.375   0.148809523809524
C   0.500   0.725   0.029761904761905
C   0.000   0.625   0.327380952380952
C   0.500   0.475   0.446428571428571
C   0.000   0.375   0.386904761904762
C   0.500   0.625   0.267857142857143
C   0.500   0.125   0.708
C   0.000   0.375   0.589285714285714
C   0.500   0.275   0.529761904761905
C   0.000   0.125   0.648809523809524
C   0.500   0.225   0.946428571428571
C   0.000   0.375   0.827380952380952
C   0.500   0.375   0.767857142857143
C   0.000   0.125   0.886904761904762
C   0.000   0.875   0.589285714285714
C   0.500   0.625   0.708
C   0.000   0.625   0.648809523809524
C   0.500   0.975   0.529761904761905
C   0.000   0.875   0.827380952380952
C   0.500   0.525   0.946428571428571
C   0.000   0.625   0.886904761904762
C   0.500   0.875   0.767857142857143


[QE-users] Environ and nmr calculations

2020-05-27 Thread Thomas Verrijdt
Dear all,

I have been using Quantum ESPRESSO to calculate NMR parameters on modified Ti
surfaces. I did all of these in vacuum, but now I am using the Environ module
to simulate a solvent (water in this case) and look at its influence on the
NMR parameters.

After a geometry optimization with Environ, I ran the NMR calculation again.
The output file of the NMR calculation gives the following error message, even
though the geometry optimization had completed:

Rotating WFCS
c_bands: 1 eigenvalues not converged
ATTENTION: ik=   1   ibnd=   1   eigenvalues differ to much!
(repeated attention message)

My question then is: is it wrong to follow an Environ calculation with an NMR
calculation? Or is the continuum added by the Environ module not included in
the subsequent NMR calculation, so that the structure, effectively placed back
into vacuum, is no longer optimized?

Thanks in advance,
Thomas Verrijdt
(student master in Chemistry, University of Antwerp)

Re: [QE-users] Run time error

2020-05-27 Thread Paolo Giannozzi
On Wed, May 27, 2020 at 8:47 AM Pooja Vyas  wrote:

Can just removing the spaces solve the issue?
>

removing, no. Adding a space where it is needed, yes. More exactly: it will
solve the problem of the "bad read". I don't know whether such a large
imaginary term is a sign of some other problem, though. For sure the part
of the code that writes dynamical matrices must be fixed
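The write-side bug being discussed is easy to reproduce: a fixed-width field with no explicit separator loses its padding space exactly when a value needs the full field width, e.g. once a value at eight decimals drops below -9.99999999. A small demonstration (Python formatting stands in for the Fortran edit descriptor; the 12-character width is an assumption for illustration, not the actual dyn-file format):

```python
row = (-7.61366009, -13.18724611, 3.04861427, 5.28035480)

# 12-character fields: "-13.18724611" fills all 12 characters,
# so it fuses with its neighbour and whitespace splitting breaks.
merged = "".join(f"{x:12.8f}" for x in row)
assert "-7.61366009-13.18724611" in merged
assert len(merged.split()) == 3  # four numbers collapse into three tokens

# Widening the field (or writing an explicit blank between fields)
# guarantees a separator and keeps the line list-readable.
safe = "".join(f"{x:15.8f}" for x in row)
assert len(safe.split()) == len(row)
```

The analogous Fortran fix is simply a wider edit descriptor, or an explicit blank between fields, in the routine that writes the dynamical matrices.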

Paolo






-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222

Re: [QE-users] Run time error

2020-05-27 Thread Pooja Vyas
Respected sir,
My equilibrium lattice parameter is 9.1334 a.u., but I want to compute
phonons while compressing and relaxing the structure; hence the above
calculation was done at 6.1141 a.u. This could be the reason for the
imaginary modes you are talking about.
Can just removing the spaces solve the issue?
Any kind of help is appreciated.


Re: [QE-users] Run time error

2020-05-27 Thread Lorenzo Monacelli

For example in line 19:

 -7.61366009-13.18724611    3.04861427  5.28035480 -7.61366009-13.18724611

The space between the real and imaginary part of the last complex number 
is so small that they merge together (and FORTRAN complains).


Maybe simply adding a white space there (and in all the multiple places 
it happens) can solve the issue.
How did you generate dynamical matrices? If they were done with ph.x, 
then maybe this is a bug that should be fixed (and it should be super 
easy).


It is also true that, looking at the negative (imaginary) modes you are
obtaining, something probably went wrong: a very under-converged choice
of the pw.x parameters, a wrong cell or atomic positions, etc...


Bests,

Lorenzo

On 27/05/20 08:32, Pooja Vyas wrote:

Dear users,
Attached file is 6.1141.dyn3. But couldn't find any error.

On Wed, May 27, 2020 at 11:56 AM Paolo Giannozzi
<p.gianno...@gmail.com> wrote:


On Wed, May 27, 2020 at 7:19 AM Pooja Vyas <poojavyas...@gmail.com> wrote:

At line 273 of file io_dyn_mat_old.f90 (unit = 1, file =
'6.1141.dyn3') Fortran runtime error: Bad real number in item
3 of list input

look into file 6.1141.dyn3: likely you will notice something
anomalous. The error message also tells you where the "bad real
number" is read

Paolo
-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,

Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222


Re: [QE-users] Run time error

2020-05-27 Thread Pooja Vyas
Dear users,
The attached file is 6.1141.dyn3, but I couldn't find any error in it.



6.1141.dyn3
Description: Binary data

Re: [QE-users] Run time error

2020-05-27 Thread Paolo Giannozzi
On Wed, May 27, 2020 at 7:19 AM Pooja Vyas  wrote:

> At line 273 of file io_dyn_mat_old.f90 (unit = 1, file = '6.1141.dyn3')
> Fortran runtime error: Bad real number in item 3 of list input
>
look into file 6.1141.dyn3: likely you will notice something anomalous.
The error message also tells you where the "bad real number" is read

Paolo
-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222