Just thinking: an application should do direct IO for a good reason, and only then. "Forcing DIO" is probably not the right thing to do - rather check why the app does DIO and either change the app's behaviour, if reasonable, or maybe use a special pool for it, e.g. one built on mirrored SSDs.
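If the DIO-heavy files can be confined to a fileset, a placement policy could route them to such a pool. A minimal sketch, assuming a pool 'ssdpool' and a fileset 'dio_data' (both names made up for illustration):

  /* route the hypothetical DIO fileset to the hypothetical SSD pool */
  RULE 'dio-to-ssd' SET POOL 'ssdpool' FOR FILESET ('dio_data')
  /* everything else stays on the default pool */
  RULE 'default' SET POOL 'system'

installed with mmchpolicy <filesystem> <policyfile>.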

BTW, the ESS has a nice mechanism to serve small IOs (direct ones too, I suppose) quickly by buffering them on flash/NVRAM (where the data is considered persistently stored, hence the IO requests are completed quickly).

Uwe


On 12.03.24 11:59, Peter Hruška wrote:
Hello,

Direct writes are problematic for both new writes and rewrites; rewrites alone are another issue we have noticed. Since indirect (direct=0) workloads are fine, it seems the easiest solution would be to force indirect IO operations for all workloads. However, we didn't find such an option.

--
S přáním pěkného dne / Best regards

*Mgr. Peter Hruška*
IT specialista

*M Computers s.r.o.*
Úlehlova 3100/10, 628 00 Brno-Líšeň (map <https://mapy.cz/s/gafufehufe>)
T: +420 515 538 136
E: [email protected] <mailto:[email protected]>

www.mcomputers.cz <http://www.mcomputers.cz/>
www.lenovoshop.cz <http://www.lenovoshop.cz/>



On Tue, 2024-03-12 at 09:59 +0100, Zdenek Salvet wrote:

On Mon, Mar 11, 2024 at 01:21:32PM +0000, Peter Hruška wrote:
We encountered a problem with write performance on GPFS when the application uses direct IO. To reproduce the issue it is enough to run fio with direct=1. The performance drop is quite dramatic: 250 MiB/s vs. 2955 MiB/s. We tried to instruct GPFS to ignore direct IO by setting "disableDIO=yes", but the directive had no effect. Is there any way to make GPFS ignore direct IO requests and use caching for everything?
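For illustration, the comparison was along these lines (block size, file size, path and iodepth here are placeholders, not necessarily our exact job):

  # buffered: ~2955 MiB/s in our case
  fio --name=buffered --filename=/gpfs/fs1/testfile --rw=write --bs=1M --size=10g \
      --ioengine=libaio --iodepth=16 --direct=0
  # direct IO: ~250 MiB/s
  fio --name=dio --filename=/gpfs/fs1/testfile --rw=write --bs=1M --size=10g \
      --ioengine=libaio --iodepth=16 --direct=1
  # the tunable we tried, set roughly like this (undocumented; no effect for us)
  mmchconfig disableDIO=yes -i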

Hello,
did you use pre-allocated file(s) (i.e. was it a re-write)?
libaio traffic is not really asynchronous with respect to the necessary
metadata operations (allocating new space and writing the allocation
structures to disk) in most Linux filesystems, and I guess this case is not
heavily optimized in GPFS either (the dioSmallSeqWriteBatching feature may
help a little, but I think it targets a different scenario).
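To check, you could lay the file down first and then re-write it in place with direct IO, so no new blocks need allocating during the timed pass. A rough sketch (paths and sizes are placeholders):

  # pass 1: allocate the file with buffered writes
  fio --name=laydown --filename=/gpfs/fs1/testfile --rw=write --bs=1M --size=10g --direct=0
  # pass 2: direct re-write of the now fully allocated file
  fio --name=dio-rewrite --filename=/gpfs/fs1/testfile --rw=write --bs=1M --size=10g \
      --ioengine=libaio --iodepth=16 --direct=1 --overwrite=1

If the re-write pass is much faster than writing a fresh file, the allocation/metadata path, rather than DIO itself, is the bottleneck.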

Best regards,
Zdenek Salvet [email protected]
Institute of Computer Science of Masaryk University, Brno, Czech Republic
and CESNET, z.s.p.o., Prague, Czech Republic
Phone: +420-549 49 6534                           Fax: +420-541 212 747
----------------------------------------------------------------------------
      Teamwork is essential -- it allows you to blame someone else.



--
Karlsruhe Institute of Technology (KIT)
Scientific Computing Centre (SCC)
Scientific Data Management (SDM)

Uwe Falke

Hermann-von-Helmholtz-Platz 1, Building 442, Room 187
D-76344 Eggenstein-Leopoldshafen

Tel: +49 721 608 28024
Email: [email protected]
www.scc.kit.edu

Registered office:
Kaiserstraße 12, 76131 Karlsruhe, Germany

KIT – The Research University in the Helmholtz Association


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
