You’re missing a few standard configs that might be relevant. I would suggest:
workerThreads=512 (or 1024)
ignorePrefetchLUNCount=yes
numaMemoryInterleave=yes (and make sure numactl is installed)

ignorePrefetchLUNCount is important: it tells the system that you have multiple HDDs backing each LUN; otherwise it thinks there’s a single spindle and won’t schedule much readahead/writebehind. (A sketch of applying these settings follows below the quoted message.)

  -jf

On Thu, 8 Feb 2024 at 16:00, Michal Hruška <[email protected]> wrote:

> @Aaron
>
> Yes, I can confirm that 2 MB blocks are transferred over.
>
> @Jan-Frode
>
> We tried changing multiple parameters, but if you know the best
> combination for sequential IO, please let me know.
>
> #mmlsconfig
> autoload no
> dmapiFileHandleSize 32
> minReleaseLevel 5.1.9.0
> tscCmdAllowRemoteConnections no
> ccrEnabled yes
> cipherList AUTHONLY
> sdrNotifyAuthEnabled yes
> pagepool 64G
> maxblocksize 16384K
> maxMBpS 40000
> maxReceiverThreads 32
> nsdMaxWorkerThreads 512
> nsdMinWorkerThreads 8
> nsdMultiQueue 256
> nsdSmallThreadRatio 0
> nsdThreadsPerQueue 3
> prefetchAggressiveness 2
> adminMode central
>
> /dev/fs0
>
> @Uwe
>
> Using iohist we found out that GPFS is overloading one dm-device (it
> took about 500 ms to finish I/Os). We replaced the "problematic"
> dm-device (as we have enough drives to play with) with a new one, but
> the overloading issue just jumped to another dm-device.
> We believe this behaviour is caused by GPFS, but we are unable to
> locate the root cause.
>
> Best,
> Michal
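For anyone trying this, a rough sketch of applying the suggested settings with the stock administration commands (mmchconfig, mmlsconfig, mmshutdown/mmstartup, and mmdiag are all standard GPFS commands; whether -i can apply a given parameter immediately varies by parameter and release, and the nsdNodes node class is assumed here, so substitute your own NSD server list if it differs):

    # Apply the suggested settings cluster-wide; -i asks for immediate
    # effect where the parameter supports it.
    mmchconfig workerThreads=512,ignorePrefetchLUNCount=yes,numaMemoryInterleave=yes -i

    # Confirm the values took:
    mmlsconfig workerThreads
    mmlsconfig ignorePrefetchLUNCount
    mmlsconfig numaMemoryInterleave

    # numaMemoryInterleave needs numactl installed and is generally only
    # picked up when the daemon restarts, so bounce the NSD servers in a
    # maintenance window:
    mmshutdown -N nsdNodes
    mmstartup -N nsdNodes

    # Then re-check per-device I/O latency to see whether the slow
    # dm-device behaviour Michal describes changes:
    mmdiag --iohist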
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
