Dear Nick,
Many thanks for your kind reply.
Everything worked after following your suggestion (switching to MKL).
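
In case it is useful to others on the list, the change was in the arch.make
used to build SIESTA: the reference libsiestaBLAS.a/libsiestaLAPACK.a and the
standalone ScaLAPACK were replaced by MKL. The exact library names below are
an assumption (they depend on the MKL version and layout, and on building
with gfortran + Open MPI as in my case), so treat this only as a sketch:

```make
# arch.make fragment: link against MKL instead of the reference
# BLAS/LAPACK/ScaLAPACK.  Library names assume an lp64, sequential
# MKL layout with gfortran and Open MPI; adjust to your installation.
MKLROOT = /opt/intel/mkl
LIBS    = -L$(MKLROOT)/lib/intel64 \
          -lmkl_scalapack_lp64 -lmkl_gf_lp64 -lmkl_sequential \
          -lmkl_core -lmkl_blacs_openmpi_lp64
```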

Have a nice weekend
Giacomo

On Thu, 5 Mar 2020 at 22:01, Nick Papior <[email protected]>
wrote:

> Sorry for the delay.
>
> I have now run your big system, and I am not able to reproduce the
> problem.
> I would suggest you try other linear algebra packages.
> The standard shipped BLAS+LAPACK packages are also *extremely* slow.
>
> On Wed, 4 Mar 2020 at 22:12, Giacomo Giorgi <[email protected]>
> wrote:
>
>> Dear all,
>> Following up on a previous, unanswered message, I would like to ask your
>> opinion about this odd behaviour of my SIESTA calculations.
>>
>> I am working with TiO2 anatase.
>> Bulk worked perfectly. Also the (101) surface unit cell worked perfectly.
>>
>> Now I need a (5x3) supercell (fdf attached).
>> Running it as a replica of the initial unit cell, I get the following
>> error:
>>
>> ....
>> ....
>> cdiag: Error in Cholesky factorisation
>> Stopping Program from Node:    4
>>          160
>> cdiag: Error in Cholesky factorisation
>> Stopping Program from Node:    5
>>          160
>> cdiag: Error in Cholesky factorisation
>> Stopping Program from Node:    0
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 6 in communicator MPI COMMUNICATOR 3 CREATE
>> FROM 0
>> with errorcode 1.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>> [node03:300332] PMIX ERROR: UNREACHABLE in file server/pmix_server.c at
>> line 2079
>> [node03:300332] PMIX ERROR: UNREACHABLE in file server/pmix_server.c at
>> line 2079
>> [node03:300332] PMIX ERROR: UNREACHABLE in file server/pmix_server.c at
>> line 2079
>> [node03:300332] PMIX ERROR: UNREACHABLE in file server/pmix_server.c at
>> line 2079
>> [node03:300332] PMIX ERROR: UNREACHABLE in file server/pmix_server.c at
>> line 2079
>> [node03:300332] 23 more processes have sent help message help-mpi-api.txt
>> / mpi-abort
>> [node03:300332] Set MCA parameter "orte_base_help_aggregate" to 0 to see
>> all help / error messages
>>
>>
>> Perhaps the structure is too large (which would be strange), so I decided
>> to run a calculation on a smaller (1x3) supercell (fdf attached, along
>> with the psf files used).
>> This time I get a different error:
>> ...
>> ...
>> Stopping Program from Node:   19
>> Bad DM normalization: Qtot, Tr[D*S] =        576.00000000
>>  638080922.01711190
>> Stopping Program from Node:   21
>> Bad DM normalization: Qtot, Tr[D*S] =        576.00000000
>>  638080922.01711190
>> Stopping Program from Node:   22
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 15 in communicator MPI COMMUNICATOR 3
>> CREATE FROM 0
>> with errorcode 1.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>> [node03:306841] 23 more processes have sent help message help-mpi-api.txt
>> / mpi-abort
>> [node03:306841] Set MCA parameter "orte_base_help_aggregate" to 0 to see
>> all help / error messages
>>
>> Since I have not found anyone able to fix this issue, which actually seems
>> to be system-independent (I have seen similar behaviour with other slabs),
>> I wonder whether any of you can help me.
>>
>> Thanks a lot.
>> Regards,
>> Giacomo
>>
>>
>> Siesta Version  : v4.1-b4
>> Architecture    : unknown
>> Compiler version: GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
>> Compiler flags  : mpif90 -O2 -fPIC -ftree-vectorize
>> PP flags        : -DFC_HAVE_ABORT -DMPI -DSIESTA__DIAG_2STAGE
>> Libraries       : libsiestaLAPACK.a libsiestaBLAS.a
>> /opt/share/scalapack_gnu/libscalapack.a
>> PARALLEL version
>>
>> * Running on 24 nodes in parallel
>> >> Start of run:   3-MAR-2020  10:41:40
>>
>>
>>
>>
>> --
>> G
>>
>>
>> "Beyond the illusions of Timbuctù and the long legs of Babalù there was this
>> road... This quiet road that flies away like a butterfly, a nostalgia,
>> nostalgia with a taste of curaçao... Perhaps one day I will explain myself better"
>>
>> (Paolo Conte, "Hemingway")
>>
>> --
>> SIESTA is supported by the Spanish Research Agency (AEI) and by the
>> European H2020 MaX Centre of Excellence (http://www.max-centre.eu/)
>>
>
>
> --
> Kind regards Nick
>
>


-- 
G


