Re: [Pw_forum] Unit-cell and Super-cell bandgap difference

2016-11-23 Thread dario rocca
Dear Vipul,
how many k-points did you use for the unit cell, and how many for the supercell?
Best
Dario
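
For context on why this matters, here is a minimal sketch (not part of the original exchange) of K_POINTS cards that sample the Brillouin zone with roughly the same density, assuming, purely as an example, a hypothetical 2x2x1 supercell built from the unit cell:

! unit cell: illustrative 6x6x6 Monkhorst-Pack grid
K_POINTS automatic
  6 6 6  0 0 0

! 2x2x1 supercell of the same crystal: the grid folds to 3x3x6,
! so the k-point density per unit of reciprocal volume is unchanged
K_POINTS automatic
  3 3 6  0 0 0

If the supercell run reuses the unit-cell grid (or vice versa), the two calculations sample the Brillouin zone with different densities, which can shift a hybrid-functional gap noticeably.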

On Thu, Nov 24, 2016 at 6:23 AM, Vipul Shivaji Ghemud <
vi...@physics.unipune.ac.in> wrote:

> Hi all,
> I am working on a system of 9 atoms in a unit cell with a band gap of 3.5 eV,
> but when I consider the supercell (4 unit cells) the band gap is
> reduced by ~0.45 eV with HSE06. The band gaps are similar with GGA. Is it due
> to the exchange-correlation contribution of the increased number of
> electrons in the system, given that QE treats the unit cell and the supercell
> each as a single system? It's a bulk cubic system. I am facing a similar
> problem with other systems as well.
>
>
>
> --
> Vipul S. Ghemud
> Ph.D. student.
> Dept of Physics,
> SPPU, Ganeshkhind,
> Pune- 411007.
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] Unit-cell and Super-cell bandgap difference

2016-11-23 Thread Vipul Shivaji Ghemud
Hi all,
I am working on a system of 9 atoms in a unit cell with a band gap of 3.5 eV,
but when I consider the supercell (4 unit cells) the band gap is
reduced by ~0.45 eV with HSE06. The band gaps are similar with GGA. Is it due
to the exchange-correlation contribution of the increased number of
electrons in the system, given that QE treats the unit cell and the supercell
each as a single system? It's a bulk cubic system. I am facing a similar
problem with other systems as well.



-- 
Vipul S. Ghemud
Ph.D. student.
Dept of Physics,
SPPU, Ganeshkhind,
Pune- 411007.


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] the error: "ortho went bananas" in cp.x run

2016-11-23 Thread 周凯旋
Dear Paolo,
When I use Gram-Schmidt I am able to run cp.x. Before the run the atomic
coordinates are ordered, but after running with Gram-Schmidt the atomic
coordinates become disordered, and I don't know why. That means the molecular
structure is disordered. I tried changing the input file (reducing the
timestep), but it has no effect.

Here are the ordered atomic coordinates:
1    3.83920    4.36850    6.19250
1   12.62220    4.36850    6.19250
1    3.83920   13.15150    6.19250
1   12.62220   13.15150    6.19250
1    3.83920    4.36850   18.89150
1   12.62220    4.36850   18.89150
1    3.83920   13.15150   18.89150
1   12.62220   13.15150   18.89150
1    8.11030    4.27040    5.65760
1    3.70050    8.76810    6.46850
1    8.11030   13.05340    5.65760
1   12.48350    8.76810    6.46850

Here are the disordered atomic coordinates:
1   -2974.896707510   2996.209985565   ***
1   -2974.896707510   2996.209985565   ***
1    2982.082731868  -2967.402557190   ***
1    2984.827099434  -2965.793683043   ***
1    2984.827099434  -2965.793683043   ***
1    2984.827099434  -2965.793683043   ***
1    2981.776364252   2993.428012641   ***
1   -2961.507388927   2981.577264955   ***
1   -2961.507388927   2981.577264955   ***

 School of Renewable Energy, North China Electric Power University,
 Beijing, 102206, China
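
For reference, a minimal sketch (illustrative values, not a tested input) of the settings discussed in this thread for a cp.x run whose iterative orthogonalization fails: a smaller timestep in &CONTROL, and in &ELECTRONS either Gram-Schmidt orthogonalization or tighter control of the iterative 'ortho' solver. The remaining namelists and cards would stay as in the input quoted below.

&CONTROL
  calculation = 'cp'
  dt          = 1.0                   ! reduced from 2.0; illustrative value
/
&ELECTRONS
  orthogonalization = 'Gram-Schmidt'  ! alternative: keep the default 'ortho' and tune
  ! ortho_eps = 1.d-9                 ! tolerance of the iterative orthogonalization
  ! ortho_max = 300                   ! maximum number of orthogonalization iterations
  electron_dynamics = 'damp'
  electron_damping  = 0.1
/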


>Try to start the run with orthogonalization='Gram-Schmidt'
>Paolo
>
>On Fri, Nov 4, 2016 at 7:27 AM, ??? <13051613...@163.com> wrote:
>>
>> Dear all!
>>
>> I am trying to run cp.x for my system, but I always get the error message:
>> "Error in routine ortho(1): ortho went bananas."
>> I have looked at the manual. It says that if the orthogonalization does not
>> converge one should reduce the timestep, or use the options ortho_max and
>> ortho_eps, but no matter how I adjust these parameters it does not work.
>>
>> Here I am attaching my cp.x input file for your kind reference.
>>
>> &CONTROL
>>   calculation = 'cp'
>>   dt = 2.0
>>   iprint = 10
>>   isave = 100
>>   ndw = 53
>>   ndr = 52
>>   outdir = './out/'
>>   nstep = 1
>>   prefix = 'ABX3wfopt'
>>   restart_mode = 'restart'
>>   verbosity = 'high'
>>   wf_collect = .false.
>>   ekin_conv_thr = 1.e-5, etot_conv_thr = 1.e-7, forc_conv_thr = 1.e-5,
>>   pseudo_dir = './'
>> /
>> &SYSTEM
>>   ibrav = 7, celldm(1) = 33.1948, celldm(3) = 1.4459
>>   ecutwfc = 30.0, ecutrho = 120.0
>>   nat = 96, ntyp = 5
>>   nr1b = 24, nr2b = 24, nr3b = 24
>> /
>> &ELECTRONS
>>   electron_dynamics = 'damp', electron_damping = 0.1
>>   emass = 400, emass_cutoff = 3.0
>>   electron_temperature = 'not_controlled'
>> /
>> &IONS
>>   ion_dynamics = 'damp', ion_damping = 0.02
>>   ion_temperature = 'not_controlled'
>> /
>> ATOMIC_SPECIES
>> Pb 207.2 Pb.pbe-dn-rrkjus_psl.0.2.2.UPF
>> I 126.9 I.pbe-n-rrkjus_psl.0.2.UPF
>> C 12.0 C.pbe-n-rrkjus_psl.0.1.UPF
>> N 14.0 N.pbe-n-rrkjus_psl.0.1.UPF
>> H 1.0 H.pbe-rrkjus_psl.0.1.UPF
>> ATOMIC_POSITIONS angstrom
>> Pb 4.2403 4.7040 6.2142
>> Pb 13.0233 4.7040 6.2142
>> Pb 4.2403 13.4870 6.2142
>> **
>> H 1.6831 0.8538 27.9733
>> H 10.6759 -1.1096 25.9846
>> H 10.4661 0.8538 27.9733
>>
>> Best regards,
>>
>> School of Renewable Energy, North China Electric Power University,
>> Beijing, 102206, China
__
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum

>-- 
>Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>Univ. Udine, via delle Scienze 208, 33100 Udine, Italy

>Phone +39-0432-558216, fax +39-0432-558222
>___
>Pw_forum mailing list
>Pw_forum@pwscf.org
>http://pwscf.org/mailman/listinfo/pw_forum



___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] Convergence

2016-11-23 Thread Максим Арсентьев
Dear all,

I'm trying to calculate the difference in total energy between two structures of a TM
silicate, but I get a different result every time.
I have used the internal optimization "relax" followed by "vc-relax", as well as "vc-relax" alone, with:
etot_conv_thr = 1.0d-4, 1.0d-5, 1.0d-3
force convergence threshold = 1.0d-3
cell_factor = 1.5, 2.0, or omitted
I also found two papers in which these results differ from each other and from my own
results.
Any help?

Thank you
Maxim
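
A minimal sketch of the convergence-related knobs mentioned above, with purely illustrative (assumed) values; a meaningful energy difference also requires that both structures be relaxed with identical cutoffs, k-point grids and pseudopotentials, ideally followed by a final scf at each relaxed geometry:

&CONTROL
  calculation   = 'vc-relax'
  etot_conv_thr = 1.0d-5       ! Ry, illustrative
  forc_conv_thr = 1.0d-4       ! Ry/Bohr, illustrative
/
&IONS
  ion_dynamics = 'bfgs'
/
&CELL
  cell_dynamics  = 'bfgs'
  press_conv_thr = 0.1         ! kbar, illustrative
  cell_factor    = 2.0
/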
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] epsilon.x

2016-11-23 Thread Andrea Ferretti



Dear Pashangpour,

what I meant is that the meaning of the variables intersmear and
intrasmear of epsilon is quite similar to that of degauss in DOS or
projwfc, and that their convergence should be studied in a similar way...
(note that very often k-point meshes denser than those used for scf runs are
needed to converge DOS/projwfc/epsilon calculations)


Andrea
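
A minimal sketch of where these variables sit in the epsilon.x input, with assumed (illustrative) values and an assumed prefix/outdir:

&inputpp
  calculation = 'eps'
  prefix      = 'mysystem'       ! must match the preceding nscf run (assumed name)
  outdir      = './out/'
/
&energy_grid
  smeartype  = 'gauss'
  intersmear = 0.10              ! eV, interband broadening (plays the role of degauss)
  intrasmear = 0.01              ! eV, intraband (Drude) broadening
  wmin = 0.0, wmax = 15.0, nw = 1500
/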


I mean the inter and intra smearings in the inputpp of the epsilon calculation, not in
the scf input. Do you mean I can converge them in the same way as the k-points,
cutoffs and smearing in the scf input?
Thanks
M Pashangpour
IIAU,Tehran,Iran

Sent from my iPhone

On 23 Nov 2016, at 19:56, Andrea Ferretti  wrote:



  Dear Pashangpour,

  the idea is very similar to the usual kpt convergence
  vs smearing parameter for a DOS calculation.

  At variance with scf runs, here you are computing a spectral quantity
  (the dielectric function as a function of the frequency), meaning that you
  may need a (much) finer mesh of kpts.

  In general, the larger the smearing, the lower the resolution of your
  spectrum (in the simplest case you are replacing Dirac deltas with
  gaussians), while the larger the kpt mesh that you use, the smaller the
  smearing parameter can be...

  I would follow a recipe like this:
  * set the smearing and converge the spectrum wrt kpts
  * if the accuracy of the spectrum (ie the resolution of its features) is
    ok with you, exit(),
    otherwise reduce the smearing parameter and iterate
  * by reducing the smearing you should expect to converge with a denser
    mesh of kpts  (a rule of thumb could be dk * delta ~ constant, where
    dk is the kpt grid spacing and delta the smearing parameter.. though
    it probably depends on the system and on your requirements)

  Andrea


Dear all

How can I find suitable values of intersmear and intrasmear in an
epsilon calculation via epsilon.x?

Thanks in advance

M. Pashangpour

PhD of physics

IAU,Tehran,Iran


Sent from my iPhone

___

Pw_forum mailing list

Pw_forum@pwscf.org

http://pwscf.org/mailman/listinfo/pw_forum



  --
  Andrea Ferretti, PhD
  S3 Center, Istituto Nanoscienze, CNR
  via Campi 213/A, 41125, Modena, Italy
  Tel: +39 059 2055322;  Skype: andrea_ferretti
  URL: http://www.nano.cnr.it

  ___
  Pw_forum mailing list
  Pw_forum@pwscf.org
  http://pwscf.org/mailman/listinfo/pw_forum





--
Andrea Ferretti, PhD
S3 Center, Istituto Nanoscienze, CNR
via Campi 213/A, 41125, Modena, Italy
Tel: +39 059 2055322;  Skype: andrea_ferretti
URL: http://www.nano.cnr.it
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] Total energy calculation

2016-11-23 Thread Максим Арсентьев
Dear all,

I'm trying to calculate the difference in total energy between two structures of a TM
silicate, but I get a different result every time.
I have used the internal optimization "relax" followed by "vc-relax", as well as "vc-relax" alone, with:
etot_conv_thr = 1.0d-4, 1.0d-5, 1.0d-3
force convergence threshold = 1.0d-3
cell_factor = 1.5, 2.0, or omitted
I also found two papers in which these results differ from each other and from my own
results.
Any help?

Thank you
Maxim
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] epsilon.x

2016-11-23 Thread mansourehp
Dear Ferretti
I mean the inter and intra smearings in the inputpp of the epsilon calculation, not in
the scf input. Do you mean I can converge them in the same way as the k-points, cutoffs and smearing
in the scf input?
Thanks
M Pashangpour
IIAU,Tehran,Iran

Sent from my iPhone

> On 23 Nov 2016, at 19:56, Andrea Ferretti  wrote:
> 
> 
> 
> Dear Pashangpour,
> 
> the idea is very similar to the usual kpt convergence 
> vs smearing parameter for a DOS calculation.
> 
> At variance with scf runs, here you are computing a spectral quantity 
> (the dielectric function as a function of the frequency), meaning that you 
> may need a (much) finer mesh of kpts.
> 
> In general, the larger the smearing, the lower the resolution of your 
> spectrum (in the simplest case you are replacing Dirac deltas with 
> gaussians), while the larger the kpt mesh that you use, the smaller the 
> smearing parameter can be...
> 
> I would follow a recipe like this:
> * set the smearing and converge the spectrum wrt kpts
> * if the accuracy of the spectrum (ie the resolution of its features) is
>   ok with you, exit(),
>   otherwise reduce the smearing parameter and iterate
> * by reducing the smearing you should expect to converge with a denser
>   mesh of kpts  (a rule of thumb could be dk * delta ~ constant, where
>   dk is the kpt grid spacing and delta the smearing parameter.. though
>   it probably depends on the system and on your requirements)
> 
> Andrea
> 
> 
>> Dear all
>> How can I find suitable values of intersmear and intrasmear in an epsilon 
>> calculation via epsilon.x?
>> Thanks in advance
>> M. Pashangpour
>> PhD of physics
>> IAU,Tehran,Iran
>> 
>> Sent from my iPhone
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
> 
> -- 
> Andrea Ferretti, PhD
> S3 Center, Istituto Nanoscienze, CNR
> via Campi 213/A, 41125, Modena, Italy
> Tel: +39 059 2055322;  Skype: andrea_ferretti
> URL: http://www.nano.cnr.it
> 
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] epsilon.x

2016-11-23 Thread Andrea Ferretti


Dear Pashangpour,

the idea is very similar to the usual kpt convergence 
vs smearing parameter for a DOS calculation.

At variance with scf runs, here you are computing a spectral quantity 
(the dielectric function as a function of the frequency), meaning that you 
may need a (much) finer mesh of kpts.

In general, the larger the smearing, the lower the resolution of your 
spectrum (in the simplest case you are replacing Dirac deltas with 
gaussians), while the larger the kpt mesh that you use, the smaller the 
smearing parameter can be...

I would follow a recipe like this:
* set the smearing and converge the spectrum wrt kpts
* if the accuracy of the spectrum (ie the resolution of its features) is
   ok with you, exit(),
   otherwise reduce the smearing parameter and iterate
* by reducing the smearing you should expect to converge with a denser
   mesh of kpts  (a rule of thumb could be dk * delta ~ constant, where
   dk is the kpt grid spacing and delta the smearing parameter.. though
   it probably depends on the system and on your requirements)

Andrea
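
A purely illustrative sketch of the dk * delta ~ constant rule of thumb (assumed numbers, not a prescription): if a 12x12x12 grid in the nscf run feeding epsilon.x is adequate with a 0.2 eV broadening, halving the broadening calls for roughly doubling the linear k-point density.

! coarser pair: larger broadening in epsilon.x (e.g. intersmear ~ 0.2 eV)
K_POINTS automatic
  12 12 12  0 0 0

! refined pair: broadening halved (e.g. intersmear ~ 0.1 eV), grid doubled,
! so that dk * delta stays roughly constant
K_POINTS automatic
  24 24 24  0 0 0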


> Dear all
> How can I find suitable values of intersmear and intrasmear in an epsilon 
> calculation via epsilon.x?
> Thanks in advance
> M. Pashangpour
> PhD of physics
> IAU,Tehran,Iran
>
> Sent from my iPhone
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>

-- 
Andrea Ferretti, PhD
S3 Center, Istituto Nanoscienze, CNR
via Campi 213/A, 41125, Modena, Italy
Tel: +39 059 2055322;  Skype: andrea_ferretti
URL: http://www.nano.cnr.it

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] problem with DFT+U

2016-11-23 Thread Sergi Vela
Dear Paolo,

Unfortunately, there's not much to report so far. Many "relax" jobs for a
system of ca. 500 atoms (including Fe) fail with the same message Davide
reported a long time ago:
_

Fatal error in PMPI_Bcast: Other MPI error, error stack:
PMPI_Bcast(2434): MPI_Bcast(buf=0x8b25e30, count=7220,
MPI_DOUBLE_PRECISION, root=0, comm=0x8407) failed
MPIR_Bcast_impl(1807)...:
MPIR_Bcast(1835):
I_MPIR_Bcast_intra(2016): Failure during collective
MPIR_Bcast_intra(1665)..: Failure during collective
_

It only occurs on some architectures. The same inputs work for me on 2
other machines, so it seems to be related to the compilation. The support
team of the HPC center I'm working with is trying to identify the problem.
It also seems to occur randomly, in the sense that some DFT+U
calculations of the same type (same cutoffs, pp's, system) show no
problem at all.

I'll try to be more helpful next time, and I'll keep you updated.

Bests,
Sergi

2016-11-23 15:21 GMT+01:00 Paolo Giannozzi :

> Thank you, but unless an example demonstrating the problem is provided, or
> at least some information on where this message comes from is supplied,
> there is close to nothing that can be done.
>
> Paolo
>
> On Wed, Nov 23, 2016 at 10:05 AM, Sergi Vela  wrote:
>
>> Dear Colleagues,
>>
>> Just to report that I'm having exactly the same problem with DFT+U. The
>> same message is appearing randomly only when I use the Hubbard term. I
>> could test versions 5.2 and 6.0 and it occurs in both.
>>
>> All my best,
>> Sergi
>>
>> 2015-07-16 18:43 GMT+02:00 Paolo Giannozzi :
>>
>>> There are many well-known problems of DFT+U, but none that is known to
>>> crash jobs with an obscure message.
>>>
>>> Rank 21 [Thu Jul 16 15:51:04 2015] [c4-2c0s15n2] Fatal error in
 PMPI_Bcast: Message truncated, error stack:
 PMPI_Bcast(1615)..: MPI_Bcast(buf=0x75265e0,
 count=160, MPI_DOUBLE_PRECISION, root=0, comm=0xc400) failed

>>>
>>> this signals a mismatch between what is sent and what is received in a
>>> broadcast operation. This may be due to an obvious bug, that however should
>>> show up at the first iteration, not after XX. Apart from compiler or MPI library
>>> bugs, another reason is the one described in sec. 8.3 of the developer
>>> manual: different processes following different execution paths. From
>>> time to time, cases like this are found  (the latest occurrence, in band
>>> parallelization of exact exchange) and easily fixed. Unfortunately, finding
>>> them (that is: where this happens) typically requires a painstaking
>>> parallel debugging.
>>>
>>> Paolo
>>> --
>>> Paolo Giannozzi, Dept. Chemistry,
>>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>>> Phone +39-0432-558216, fax +39-0432-558222
>>>
>>> ___
>>> Pw_forum mailing list
>>> Pw_forum@pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>
>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] problem with DFT+U

2016-11-23 Thread Paolo Giannozzi
Thank you, but unless an example demonstrating the problem is provided, or
at least some information on where this message comes from is supplied,
there is close to nothing that can be done.

Paolo

On Wed, Nov 23, 2016 at 10:05 AM, Sergi Vela  wrote:

> Dear Colleagues,
>
> Just to report that I'm having exactly the same problem with DFT+U. The
> same message is appearing randomly only when I use the Hubbard term. I
> could test versions 5.2 and 6.0 and it occurs in both.
>
> All my best,
> Sergi
>
> 2015-07-16 18:43 GMT+02:00 Paolo Giannozzi :
>
>> There are many well-known problems of DFT+U, but none that is known to
>> crash jobs with an obscure message.
>>
>> Rank 21 [Thu Jul 16 15:51:04 2015] [c4-2c0s15n2] Fatal error in
>>> PMPI_Bcast: Message truncated, error stack:
>>> PMPI_Bcast(1615)..: MPI_Bcast(buf=0x75265e0, count=160,
>>> MPI_DOUBLE_PRECISION, root=0, comm=0xc400) failed
>>>
>>
>> this signals a mismatch between what is sent and what is received in a
>> broadcast operation. This may be due to an obvious bug, that however should
>> show up at the first iteration, not after XX. Apart from compiler or MPI library
>> bugs, another reason is the one described in sec. 8.3 of the developer
>> manual: different processes following different execution paths. From
>> time to time, cases like this are found  (the latest occurrence, in band
>> parallelization of exact exchange) and easily fixed. Unfortunately, finding
>> them (that is: where this happens) typically requires a painstaking
>> parallel debugging.
>>
>> Paolo
>> --
>> Paolo Giannozzi, Dept. Chemistry,
>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>> Phone +39-0432-558216, fax +39-0432-558222
>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>



-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] QE 6.0 slower than 5.4 ??

2016-11-23 Thread nicola varini
Hi Francesco, in order to have performance reproducibility, be sure to:
-disable hyperthreading
-disable the node health check mechanism
In both cases I have experienced a slowdown of up to a factor of 2.
You also mention that you see a slowdown while using threads.
The system you mention looks way too small to gain any significant
benefit from MPI+OpenMP execution.

BR,


Nicola


On 11/23/2016 12:09 PM, Francesco Pelizza wrote:
> Hi Dear community,
>
>
> I have a question. Since the QE 6.0 release I started using it, and I
> noticed a slowdown for systems of up to 48 atoms / 100 electrons running
> on a few cores, and a speedup when running on more cores.
>
> In other words, taking as an example an insulating polymer, set up in its
> lattice with 96 electrons:
>
> using QE 5.4 on 8 threads takes 25-35% less time than QE 6.0;
>
> that's generally true from scf to vc-relax to bands, phonons, or
> whatever calculation.
>
> If I scale up on servers or HPC I do not see a slowdown, and QE
> 6.0 is perhaps on average 10-15% faster.
>
>
> Was this to be expected?
>
> Has something changed in the way the system is distributed across threads?
>
>
> BW
>
> Francesco Pelizza
>
> Strathclyde University
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum

-- 
Nicola Varini, PhD

Scientific IT and Application Support (SCITAS)
Theory and simulation of materials (THEOS)
ME B2 464 (Bâtiment ME)
Station 1
CH-1015 Lausanne
+41 21 69 31332
http://scitas.epfl.ch


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] QE 6.0 slower than 5.4 ??

2016-11-23 Thread Paolo Giannozzi
I do not think anything has been done that negatively affects speed of
execution. Before taking seriously any claim on increased or decreased
speed due to X, I want to see a "diff" of two runs made with different X
and all other conditions unchanged; plus a few different runs of the same
code yielding the same results and very close timings.

Paolo

On Wed, Nov 23, 2016 at 12:09 PM, Francesco Pelizza <
francesco.peli...@strath.ac.uk> wrote:

> Hi Dear community,
>
>
> I have a question. Since the QE 6.0 release I started using it, and I
> noticed a slowdown for systems of up to 48 atoms / 100 electrons running
> on a few cores, and a speedup when running on more cores.
>
> In other words, taking as an example an insulating polymer, set up in its
> lattice with 96 electrons:
>
> using QE 5.4 on 8 threads takes 25-35% less time than QE 6.0;
>
> that's generally true from scf to vc-relax to bands, phonons, or
> whatever calculation.
>
> If I scale up on servers or HPC I do not see a slowdown, and QE
> 6.0 is perhaps on average 10-15% faster.
>
>
> Was this to be expected?
>
> Has something changed in the way the system is distributed across threads?
>
>
> BW
>
> Francesco Pelizza
>
> Strathclyde University
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>



-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

Re: [Pw_forum] VC-relax collapsing unit cell

2016-11-23 Thread Paolo Giannozzi
I made a quick test with USPPs. The unit cell shrinks by about 10%.

Paolo
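
A minimal sketch of the two routes discussed in this thread, with assumed (illustrative) cutoffs; norm-conserving Fe with semicore states needs a far larger ecutwfc than 35 Ry, while ultrasoft pseudopotentials get by with a much smaller ecutwfc at the price of a larger ecutrho:

! route 1: keep the norm-conserving HGH pseudopotentials (fragment of &SYSTEM)
&SYSTEM
  ecutwfc = 250        ! Ry, illustrative; test convergence in the 200-300 Ry range
/

! route 2: switch to ultrasoft pseudopotentials, as in the quick test above (fragment of &SYSTEM)
&SYSTEM
  ecutwfc = 50         ! Ry, illustrative
  ecutrho = 400        ! Ry, illustrative; USPPs typically need ecutrho ~ 8-12 x ecutwfc
/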

On Tue, Nov 22, 2016 at 4:56 PM, John Bilgerman 
wrote:

> Thanks for the tip. I'd only tried up to about a hundred Ry in my tests; I'll
> start looking even higher. I attempted this over the weekend on my (old!)
> desktop, but found I will have to wait a week or two until I get cluster
> access before I can check whether it was the only issue.
>
> Thanks,
> John
>
> On Sat, Nov 19, 2016 at 2:34 AM, Paolo Giannozzi 
> wrote:
>
>> Not sure this is the (only) problem, but 35 Ry for norm-conserving Iron
>> with semicore states is way too small. 200-300 Ry is a more appropriate
>> cutoff.
>>
>> Paolo
>>
>> On Sat, Nov 19, 2016 at 12:25 AM, John Bilgerman > > wrote:
>>
>>> Hi,
>>>
>>> I've been banging my head against this and cannot find what is likely a
>>> silly mistake despite many tests and lots of reading.
>>>
>>> I'm trying to optimize the (known) structure of NaFePO4 as a test. I'm
>>> starting from the experimental crystal structure, so the drastic collapse
>>> of the unit cell to < 1/2 suggests an issue.
>>>
>>> I know the common problem is inputting the structure wrong, but I've done
>>> my best (and sanity-checked the input/output files with Xcrysden).
>>>
>>> I'm new to QE, any help would be appreciated.
>>>
>>> Input file:
>>>  &CONTROL
>>>  calculation = 'vc-relax' ,
>>> restart_mode = 'from_scratch' ,
>>>   outdir = './' ,
>>>   wfcdir = './scratch' ,
>>>   pseudo_dir = './pseudo' ,
>>>  disk_io = 'default' ,
>>>verbosity = 'high' ,
>>>  /
>>>  &SYSTEM
>>>ibrav = 8,
>>>  space_group = 62 ,
>>>A = 9.001 ,
>>>B = 6.874 ,
>>>C = 5.052 ,
>>>cosAB = 0 ,
>>>cosAC = 0 ,
>>>cosBC = 0 ,
>>>  nat = 6,
>>> ntyp = 4,
>>>  ecutwfc = 35 ,
>>>  ecutrho = 140 ,
>>>  occupations = 'smearing' ,
>>>  degauss = 0.02 ,
>>> smearing = 'gaussian' ,
>>>nspin = 2 ,
>>>starting_magnetization(1) = 0.7,
>>>starting_magnetization(2) = 0,
>>>starting_magnetization(3) = 0,
>>>starting_magnetization(4) = 0,
>>> noncolin = .false. ,
>>>  /
>>>  &ELECTRONS
>>>  diagonalization = 'david' ,
>>>  /
>>>  &IONS
>>>  /
>>>  &CELL
>>>  /
>>> ATOMIC_SPECIES
>>>Fe   55.0  Fe.pbe-sp-hgh.upf
>>> P   30.0  P.pbe-hgh.upf
>>>Na   22.0  Na.pbe-sp-hgh.upf
>>> O   16.0  O.pbe-hgh.upf
>>> ATOMIC_POSITIONS crystal_sg
>>> Fe 4a
>>> P 4c 0.17585  0.46447
>>> Na 4c  0.34999  0.9702
>>> O 8d 0.1212 0.0682 0.3177
>>> O 8d 0.3486 0.25 0.4561
>>> O 8d 0.1154 0.25 0.7507
>>> K_POINTS automatic
>>>   2 3 4   1 1 1
>>>
>>>
>>> The relevant parts of the CIF file for the structure are:
>>> ...
>>> _cell_length_a 9.001(8)
>>> _cell_length_b 6.874(3)
>>> _cell_length_c 5.052(4)
>>> _cell_angle_alpha 90.
>>> _cell_angle_beta 90.
>>> _cell_angle_gamma 90.
>>> _cell_volume 312.58
>>> _cell_formula_units_Z 4
>>> _symmetry_space_group_name_H-M 'P n m a'
>>> _symmetry_Int_Tables_number 62
>>> ...
>>> Fe1 Fe2+ 4 a 0 0 0 . 1. 0
>>> P1 P5+ 4 c 0.17585(4) 0.25 0.46447(8) . 1. 0
>>> Na1 Na1+ 4 c 0.34999(9) 0.25 0.9702(2) . 1. 0
>>> O1 O2- 8 d 0.1212(1) 0.0682(1) 0.3177(2) . 1. 0
>>> O2 O2- 4 c 0.3486(1) 0.25 0.4561(2) . 1. 0
>>> O3 O2- 4 c 0.1154(1) 0.25 0.7507(2) . 1. 0
>>> ...
>>>
>>> John
>>>
>>> ___
>>> Pw_forum mailing list
>>> Pw_forum@pwscf.org
>>> http://pwscf.org/mailman/listinfo/pw_forum
>>>
>>
>>
>>
>> --
>> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
>> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
>> Phone +39-0432-558216, fax +39-0432-558222
>>
>>
>> ___
>> Pw_forum mailing list
>> Pw_forum@pwscf.org
>> http://pwscf.org/mailman/listinfo/pw_forum
>>
>
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>



-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum

[Pw_forum] QE 6.0 slower than 5.4 ??

2016-11-23 Thread Francesco Pelizza
Hi Dear community,


I have a question. Since the QE 6.0 release I started using it, and I
noticed a slowdown for systems of up to 48 atoms / 100 electrons running
on a few cores, and a speedup when running on more cores.

In other words, taking as an example an insulating polymer, set up in its
lattice with 96 electrons:

using QE 5.4 on 8 threads takes 25-35% less time than QE 6.0;

that's generally true from scf to vc-relax to bands, phonons, or
whatever calculation.

If I scale up on servers or HPC I do not see a slowdown, and QE
6.0 is perhaps on average 10-15% faster.


Was this to be expected?

Has something changed in the way the system is distributed across threads?


BW

Francesco Pelizza

Strathclyde University

___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] Running in Parallel

2016-11-23 Thread Filippo SPIGA
On Nov 22, 2016, at 11:48 PM, Mofrad, Amir Mehdi (MU-Student) 
 wrote:
> After I compiled version 6 I can't run it in parallel.

A bit more information about how you compiled it and how you run it would be useful
for understanding your problem.

--
Filippo SPIGA ~ Quantum ESPRESSO Foundation ~ http://www.quantum-espresso.org


___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum


Re: [Pw_forum] problem with DFT+U

2016-11-23 Thread Sergi Vela
Dear Colleagues,

Just to report that I'm having exactly the same problem with DFT+U. The
same message is appearing randomly only when I use the Hubbard term. I
could test versions 5.2 and 6.0 and it occurs in both.

All my best,
Sergi

2015-07-16 18:43 GMT+02:00 Paolo Giannozzi :

> There are many well-known problems of DFT+U, but none that is known to
> crash jobs with an obscure message.
>
> Rank 21 [Thu Jul 16 15:51:04 2015] [c4-2c0s15n2] Fatal error in
>> PMPI_Bcast: Message truncated, error stack:
>> PMPI_Bcast(1615)..: MPI_Bcast(buf=0x75265e0, count=160,
>> MPI_DOUBLE_PRECISION, root=0, comm=0xc400) failed
>>
>
> this signals a mismatch between what is sent and what is received in a
> broadcast operation. This may be due to an obvious bug, that however should
> show up at the first iteration, not after XX. Apart from compiler or MPI library
> bugs, another reason is the one described in sec. 8.3 of the developer
> manual: different processes following different execution paths. From
> time to time, cases like this are found  (the latest occurrence, in band
> parallelization of exact exchange) and easily fixed. Unfortunately, finding
> them (that is: where this happens) typically requires a painstaking
> parallel debugging.
>
> Paolo
> --
> Paolo Giannozzi, Dept. Chemistry,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
> ___
> Pw_forum mailing list
> Pw_forum@pwscf.org
> http://pwscf.org/mailman/listinfo/pw_forum
>
___
Pw_forum mailing list
Pw_forum@pwscf.org
http://pwscf.org/mailman/listinfo/pw_forum