Dear Patrick,

Thanks for the quick answer.


Sorry for the broken link; I sent the code as a tarball in my previous mail anyway.


I myself use Fortran together with Lustre, but with a different application I/O layer than pure Fortran I/O (i.e. MPI-IO or HDF5), and it works just fine.


I had never used pure Fortran I/O with Lustre and was surprised by the low performance that users reported on our filesystem.


In the tarball I adapted the pure Fortran code to use a different application I/O layer (HDF5).

In this case I see no performance problem.


Could it be that in both cases there are data contention problems and/or temporary network issues, but the HDF5 I/O is simply more resilient to them than the Fortran I/O?

In any case, how can one properly check for such Lustre system problems (data contention/network)?
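For what it's worth, a first client-side check I would try is inspecting the file's striping layout and the per-OST fill levels with the standard `lfs` utility (the paths below are placeholders, and the stripe values are illustrative only, not a recommendation for your system):

```shell
# Inspect how the output file is striped across OSTs
# (a file confined to a single busy OST can bottleneck serial Fortran I/O)
lfs getstripe /lustre/path/to/output.dat

# Show usage per OST to spot unbalanced or nearly full targets
lfs df -h /lustre

# Stripe a new output file over 4 OSTs with a 4 MiB stripe size
lfs setstripe -c 4 -S 4M /lustre/path/to/new_output.dat
```

Server-side contention and network statistics are presumably only visible to the admins, as Patrick suggested.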


Regards,

Denis



________________________________
From: Patrick Farrell <[email protected]>
Sent: Thursday, February 3, 2022 4:15:16 PM
To: Bertini, Denis Dr.; [email protected]
Subject: Re: RE-Fortran IO problem

Denis,

FYI, the git link you provided seems to be non-public - it asks for a GSI login.

Fortran is widely used for applications on Lustre, so it's unlikely to be a 
fortran specific issue.  If you're seeing I/O rates drop suddenly during
activity, rather than being reliably low for some particular operation, I would 
look to the broader Lustre system.  It may be suddenly extremely busy or there 
could be, eg, a temporary network issue - Assuming this is a system belonging 
to your institution, I'd check with your admins.

Regards,
Patrick
________________________________
From: lustre-discuss <[email protected]> on behalf of 
Bertini, Denis Dr. <[email protected]>
Sent: Thursday, February 3, 2022 6:43 AM
To: [email protected] <[email protected]>
Subject: [lustre-discuss] RE-Fortran IO problem


Hi,


Just as an add-on to my previous mail: the problem also shows up with Intel Fortran, so it is not specific to the GNU Fortran compiler. It therefore seems to be linked to how Fortran I/O is handled, which appears to be sub-optimal in the case of a Lustre filesystem.


I would be grateful if someone could confirm or refute this.


Here again is the link to the code I used for my benchmarks:


https://git.gsi.de/hpc/cluster/ci_ompi/-/tree/main/f/src


Best,

Denis


---------
Denis Bertini
Abteilung: CIT
Ort: SB3 2.265a

Tel: +49 6159 71 2240
Fax: +49 6159 71 2986
E-Mail: [email protected]

GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the GSI Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
Ministerialdirigent Dr. Volkmar Dietz
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
