Hi

Please review:

https://www.ibm.com/docs/en/storage-scale/5.0.3?topic=recommendations-operating-system-configuration-tuning

Regards



Yaron Daniel
Storage and Cloud Consultant
Technology Services, IBM Technology Lifecycle Service
94 Em Ha'Moshavot Rd
Petach Tiqva, 49527
Israel

Phone: +972-3-916-5672
Fax: +972-3-916-5672
Mobile: +972-52-8395593
e-mail: [email protected]
Webex: https://ibm.webex.com/meet/yard
IBM Israel



From: gpfsug-discuss <[email protected]> On Behalf Of Jan-Frode 
Myklebust
Sent: Tuesday, 23 January 2024 21:30
To: gpfsug main discussion list <[email protected]>
Subject: [EXTERNAL] Re: [gpfsug-discuss] IBM Flashsystem 7300 HDD sequential 
write performance issue


First thing I would check is that the GPFS block size is a multiple of a full 
RAID stripe. It’s been a while since I worked with SVC/FlashSystem performance, 
but this has been my main issue. So, 8+2p with the default 128KB «chunk size» 
would work with 1 MB or larger block size.
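
As a quick sanity check (a sketch with assumed values, not taken from any particular array), the full stripe is simply data strips times chunk size, and the GPFS block size should be an integer multiple of it:

    # Assumed values for illustration only: 8+2p DRAID with 128 KiB chunks,
    # and a 4 MiB GPFS block size (e.g. as reported by "mmlsfs <fs> -B").
    data_strips = 8
    chunk_kib = 128
    full_stripe_kib = data_strips * chunk_kib   # 8 * 128 = 1024 KiB = 1 MiB
    gpfs_block_kib = 4096
    print("full stripe:", full_stripe_kib, "KiB")
    print("aligned:", gpfs_block_kib % full_stripe_kib == 0)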

The other thing was that it’s important to disable prefetching (chsystem 
-cacheprefetch off), as it will always be prefetching the wrong data because of 
how GPFS scatters the blocks.

And, on the Linux side, there is a max device transfer size setting that has had a 
huge impact on some systems, but the exact setting escapes me right now.
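
If it is the setting I am thinking of, it may be the per-device max_sectors_kb limit under /sys/block/<dev>/queue, but that is an assumption on my part, not a confirmed answer; the sketch below only reads the current values so you can compare them against your I/O size:

    # Assumption: the "max transfer size" above refers to the block-layer
    # max_sectors_kb limit; this only prints current values, it changes nothing.
    from pathlib import Path

    for q in sorted(Path("/sys/block").glob("sd*/queue")):
        cur = (q / "max_sectors_kb").read_text().strip()
        hw = (q / "max_hw_sectors_kb").read_text().strip()
        print(f"{q.parent.name}: max_sectors_kb={cur} (hw limit {hw})")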


HTH


  -jf


Tue, 23 Jan 2024 at 15:05, Petr Plodík <[email protected]> wrote:
Hi,

we have a GPFS cluster with two IBM FlashSystem 7300 systems with HD expansion 
and 80x 12 TB HDDs each (in DRAID 8+P+Q), and 3 GPFS servers connected via 32G FC. 
We are tuning sequential write performance to the HDDs and seeing suboptimal 
results. After several tests, it turns out that the bottleneck seems to be 
single-HDD write performance, which is below 40 MB/s, whereas one would expect at 
least 100 MB/s.

Does anyone have experience with IBM FlashSystem sequential write performance 
tuning, or have these arrays in their infrastructure? We would really appreciate 
any help or explanation.

Thank you!

Petr Plodik
M Computers s.r.o.
[email protected]<mailto:[email protected]>



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
