Re: [QE-users] Problems with DFT + U Calculations: Fatal error in PMPI_Bcast

2018-11-10 Thread arini kar
Dear Sir,

Thank you for sending me the patch file. I tried applying it on my
cluster; however, when I run the command 'make', I receive the following
message:
Makefile:9: make.inc: No such file or directory
make: *** No rule to make target `make.inc'.  Stop.

Could you please let me know the possible correction? I would also like to
know how I can revert the patch.
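For reference, make.inc is normally generated by running configure in the
top-level QE directory, so this error usually means configure has not been
run there (or the tree has been cleaned). A minimal sketch, assuming a
standard QE 6.3 source tree and a patch file named fix.patch (hypothetical
name) that was applied with -p1:

  cd /path/to/qe-6.3
  ./configure            # generates make.inc
  make pw

  # to revert the patch later:
  patch -R -p1 < fix.patch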

Regards
Arini Kar
M.Sc.-Ph.D.
IIT Bombay
India

Re: [QE-users] Problems with DFT + U Calculations: Fatal error in PMPI_Bcast

2018-10-31 Thread Paolo Giannozzi
I have reproduced a different error, though. It's the usual problem: small
differences between different processors, building up for no good reason.
It might be related to your problem as well. Solution soon (I hope).
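In the meantime, one minimal way to check whether processor-dependent
round-off is building up (a sketch, assuming pw.x is in the PATH and the
input file is called scf.in, a hypothetical name) is to run the same input
with two different numbers of MPI tasks and compare the convergence history:

  mpirun -np 4  pw.x -in scf.in > scf-np4.out
  mpirun -np 16 pw.x -in scf.in > scf-np16.out
  # collect total energies and scf accuracy estimates from each run
  grep -e '^!' -e 'estimated scf accuracy' scf-np4.out  > np4.txt
  grep -e '^!' -e 'estimated scf accuracy' scf-np16.out > np16.txt
  diff np4.txt np16.txt   # noticeable drift points to processor-dependent round-off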

Paolo

On Thu, Oct 25, 2018 at 10:13 AM Paolo Giannozzi wrote:

> From the FAQ: "Mysterious, unpredictable, erratic errors in parallel
> execution are almost always coming from bugs in the compiler or/and in the
> MPI libraries and sometimes even from flaky hardware."
> If the error is not reproducible on a different machine, the above
> statement applies. I couldn't reproduce any errors - apart from lack of
> convergence - but your job takes too much time for extensive testing.
>
> Paolo
>
> On Wed, Oct 24, 2018 at 2:56 PM arini kar  wrote:
>
>> Dear Sir,
>>
>> The error can arise during any scf iteration. In some of the
>> calculations it arises during the final scf calculation, while in others
>> it arises in the middle of the run.
>>
>> Is it due to some issue with parallelization? If so, what would be the
>> appropriate parallelization? I am currently using the default
>> parallelization.
>>
>> Regards
>> Arini Kar
>> M.Sc.-Ph.D.
>> IIT Bombay
>> India
>
>
>
> --
> Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
> Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
> Phone +39-0432-558216, fax +39-0432-558222
>
>

-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222

Re: [QE-users] Problems with DFT + U Calculations: Fatal error in PMPI_Bcast

2018-10-25 Thread Paolo Giannozzi
From the FAQ: "Mysterious, unpredictable, erratic errors in parallel
execution are almost always coming from bugs in the compiler or/and in the
MPI libraries and sometimes even from flaky hardware."
If the error is not reproducible on a different machine, the above
statement applies. I couldn't reproduce any errors - apart from lack of
convergence - but your job takes too much time for extensive testing.
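If a second machine is not available, rebuilding QE against a different
compiler/MPI stack on the same cluster is the next best test. A minimal
sketch, assuming an environment-modules system (the module names below are
hypothetical; use whatever your cluster provides):

  module purge
  module load gcc/7.3.0 openmpi/3.1.0   # hypothetical alternative toolchain
  cd /path/to/qe-6.3
  make veryclean
  ./configure MPIF90=mpif90 CC=gcc
  make pw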

Paolo

On Wed, Oct 24, 2018 at 2:56 PM arini kar  wrote:

> Dear Sir,
>
> The error can arise during any scf iteration. In some of the
> calculations it arises during the final scf calculation, while in others
> it arises in the middle of the run.
>
> Is it due to some issue with parallelization? If so, what would be the
> appropriate parallelization? I am currently using the default
> parallelization.
>
> Regards
> Arini Kar
> M.Sc.-Ph.D.
> IIT Bombay
> India



-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222

Re: [QE-users] Problems with DFT + U Calculations: Fatal error in PMPI_Bcast

2018-10-24 Thread Dr. K. C. Bhamu
Dear Prof. Paolo,

I am also getting the same error, on the same cluster that Mr. A. Kar is
using, with QE 6.3 and "composer_xe_2015".
I am invoking the command "mpirun -np 32 $QE/pw.x -npool 2", followed by the
input and output files, for a vc-relax run.
For me it happens in the last scf, after the message "End final coordinates".
The run went fine up to scf cycle 53, and the error appears only in some cases.

The following seems strange to me (from grep scf *out):

A final scf calculation at the relaxed structure.
 estimated scf accuracy<   0.35117344 Ry
 estimated scf accuracy<  32.50696577 Ry
 estimated scf accuracy<   0.15737732 Ry
 estimated scf accuracy<   0.08941433 Ry
 estimated scf accuracy<   0.04130004 Ry
 estimated scf accuracy<   0.03367160 Ry
 estimated scf accuracy<   0.01518230 Ry
 estimated scf accuracy<   0.01241863 Ry
 estimated scf accuracy<   0.00237884 Ry
 estimated scf accuracy<   0.02059544 Ry
 estimated scf accuracy<   0.01326459 Ry
 estimated scf accuracy<   0.02823102 Ry
 estimated scf accuracy<   0.14757734 Ry
 estimated scf accuracy<   0.05952341 Ry
 estimated scf accuracy<   0.05929330 Ry
 estimated scf accuracy<   0.00738492 Ry
 estimated scf accuracy<   0.00555608 Ry
 estimated scf accuracy<   0.00535798 Ry
 estimated scf accuracy<   0.00688829 Ry
 estimated scf accuracy<   0.00169491 Ry
 estimated scf accuracy<   0.00067948 Ry
 estimated scf accuracy<   0.06297539 Ry
 estimated scf accuracy<   0.05981938 Ry
 estimated scf accuracy<   0.05814955 Ry
 estimated scf accuracy<   0.06202781 Ry
 estimated scf accuracy<   0.06178400 Ry
 estimated scf accuracy<   0.05837077 Ry
 estimated scf accuracy<   0.07705482 Ry
 estimated scf accuracy<   0.03734422 Ry
 estimated scf accuracy<   0.01355381 Ry
 estimated scf accuracy<   0.02777065 Ry
 estimated scf accuracy<   0.02632385 Ry
 estimated scf accuracy<   0.02607268 Ry
 estimated scf accuracy<   0.01001318 Ry
 estimated scf accuracy<   0.00563499 Ry
 estimated scf accuracy<   0.5798 Ry
 estimated scf accuracy<   0.00066655 Ry
 estimated scf accuracy<   0.0103 Ry
 estimated scf accuracy<   0.0082 Ry
 estimated scf accuracy<   0.0002 Ry
 estimated scf accuracy<   0.0001 Ry
 estimated scf accuracy<  6.0E-11 Ry
 estimated scf accuracy<   0.7832 Ry
 estimated scf accuracy<   0.7830 Ry
 estimated scf accuracy<   0.7712 Ry
 estimated scf accuracy<   0.7558 Ry
 estimated scf accuracy<   0.7271 Ry
 estimated scf accuracy<   0.6239 Ry
 estimated scf accuracy<   0.4313 Ry
 estimated scf accuracy<   0.2601 Ry


I still have doubts about my PPs (pseudopotentials). If this is due to
something else, please let me know what additional information I can supply
so that the error can be reproduced.

Kind regards
Bhamu

CSIR-NCL, Pune
India

On Mon, Oct 22, 2018 at 11:19 AM Paolo Giannozzi wrote:

> QE version?
>
> On Sat, Oct 20, 2018 at 9:01 AM arini kar  wrote:
>
>> Dear Quantum Espresso users,
>>
>> I have been trying to relax a 2x2x1 supercell of hematite doped with Ge
>> and an oxygen vacancy. However, after a few electronic iterations, I
>> received the following error:
>>
>> Fatal error in PMPI_Bcast: Other MPI error, error stack:
>> PMPI_Bcast(2112): MPI_Bcast(buf=0x11806c00, count=7500,
>> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
>> MPIR_Bcast_impl(1670)...:
>> I_MPIR_Bcast_intra(1887): Failure during collective
>> MPIR_Bcast_intra(1524)..: Failure during collective
>> Fatal error in PMPI_Bcast: Other MPI error, error stack:
>> PMPI_Bcast(2112): MPI_Bcast(buf=0x56fced0, count=7500,
>> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
>> MPIR_Bcast_impl(1670)...:
>> I_MPIR_Bcast_intra(1887): Failure during collective
>> MPIR_Bcast_intra(1524)..: Failure during collective
>> Fatal error in PMPI_Bcast: Other MPI error, error stack:
>> PMPI_Bcast(2112): MPI_Bcast(buf=0x1080d330, count=7500,
>> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
>> MPIR_Bcast_impl(1670)...:
>> I_MPIR_Bcast_intra(1887): Failure during collective
>> MPIR_Bcast_intra(1524)..: Failure during collective
>> Fatal error in PMPI_Bcast: Other MPI error, error stack:
>> PMPI_Bcast(2112): MPI_Bcast(buf=0x10c025c0, count=7500,
>> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
>> MPIR_Bcast_impl(1670)...:
>> I_MPIR_Bcast_intra(1887): Failure during collective
>> MPIR_Bcast_intra(1524)..: Failure during collective
>> Fatal error in PMPI_Bcast: Other MPI error, error stack:

Re: [QE-users] Problems with DFT + U Calculations: Fatal error in PMPI_Bcast

2018-10-24 Thread arini kar
Dear Sir,

The error can arise during any scf iteration. In some of the calculations
it arises during the final scf calculation, while in others it arises in
the middle of the run.

Is it due to some issue with parallelization? If so, what would be the
appropriate parallelization? I am currently using the default
parallelization.
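For reference, the parallelization levels of pw.x can be set explicitly on
the command line instead of relying on the defaults. A minimal sketch for 32
MPI tasks (the pool and diagonalization group sizes below are illustrative
and must be chosen according to the number of k-points and the system size;
scf.in and scf.out are placeholder file names):

  # 4 k-point pools, 4 tasks in the ScaLAPACK diagonalization group
  mpirun -np 32 pw.x -nk 4 -nd 4 -in scf.in > scf.out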

Regards
Arini Kar
M.Sc.-Ph.D.
IIT Bombay
India

Re: [QE-users] Problems with DFT + U Calculations: Fatal error in PMPI_Bcast

2018-10-22 Thread Paolo Giannozzi
Does it happen in the last step, after a message like 'A final scf
calculation at the relaxed structure' or 'lsda relaxation :  a final
configuration with zero'? Or in the middle of the run? In the latter case,
it is impossible to say anything unless the error is reproducible.

Paolo

On Mon, Oct 22, 2018 at 12:13 PM arini kar  wrote:

> Dear Sir,
>
> I am using QE v6.3Max for the simulations.
>
> Regards
> Arini Kar
> M.Sc.-Ph.D.
> IIT Bombay
> India



-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222

Re: [QE-users] Problems with DFT + U Calculations: Fatal error in PMPI_Bcast

2018-10-22 Thread arini kar
Dear Sir,

I am using QE v6.3Max for the simulations.

Regards
Arini Kar
M.Sc.-Ph.D.
IIT Bombay
India

Re: [QE-users] Problems with DFT + U Calculations: Fatal error in PMPI_Bcast

2018-10-21 Thread Paolo Giannozzi
QE version?

On Sat, Oct 20, 2018 at 9:01 AM arini kar  wrote:

> Dear Quantum Espresso users,
>
> I have been trying to relax a 2x2x1 supercell of hematite doped with Ge
> and an oxygen vacancy. However, after a few electronic iterations, I
> received the following error:
>
> Fatal error in PMPI_Bcast: Other MPI error, error stack:
> PMPI_Bcast(2112): MPI_Bcast(buf=0x11806c00, count=7500,
> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
> MPIR_Bcast_impl(1670)...:
> I_MPIR_Bcast_intra(1887): Failure during collective
> MPIR_Bcast_intra(1524)..: Failure during collective
> Fatal error in PMPI_Bcast: Other MPI error, error stack:
> PMPI_Bcast(2112): MPI_Bcast(buf=0x56fced0, count=7500,
> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
> MPIR_Bcast_impl(1670)...:
> I_MPIR_Bcast_intra(1887): Failure during collective
> MPIR_Bcast_intra(1524)..: Failure during collective
> Fatal error in PMPI_Bcast: Other MPI error, error stack:
> PMPI_Bcast(2112): MPI_Bcast(buf=0x1080d330, count=7500,
> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
> MPIR_Bcast_impl(1670)...:
> I_MPIR_Bcast_intra(1887): Failure during collective
> MPIR_Bcast_intra(1524)..: Failure during collective
> Fatal error in PMPI_Bcast: Other MPI error, error stack:
> PMPI_Bcast(2112): MPI_Bcast(buf=0x10c025c0, count=7500,
> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
> MPIR_Bcast_impl(1670)...:
> I_MPIR_Bcast_intra(1887): Failure during collective
> MPIR_Bcast_intra(1524)..: Failure during collective
> Fatal error in PMPI_Bcast: Other MPI error, error stack:
> PMPI_Bcast(2112): MPI_Bcast(buf=0x112e96c0, count=7500,
> MPI_DOUBLE_PRECISION, root=0, comm=0x8405) failed
> MPIR_Bcast_impl(1670)...:
> I_MPIR_Bcast_intra(1887): Failure during collective
> MPIR_Bcast_intra(1524)..: Failure during collective
> [16:ycn213.en.yuva.param] unexpected disconnect completion event from
> [7:ycn217.en.yuva.param]
> Assertion failed in file ../../dapl_conn_rc.c at line 1128: 0
> internal ABORT - process 16
>
> The input file is attached below. Since I am new to Quantum ESPRESSO, I am
> not able to find a solution to the problem. I would appreciate your help
> with possible corrections.
>
> Regards
> Arini Kar
> M.Sc.-Ph.D.
> IIT Bombay
> India
>



-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222