This won't affect your current issue, but if you're doing a lot of large sequential IO you may want to consider setting prefetchPct to 40-60 percent instead of the default 20%. In our environment that has a measurable impact, but we have a lot less RAM in the pagepool than you do (8G versus 64G).
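If you want to try it, something along these lines should work — though please double-check against the 5.1.9 docs, since I'm not certain whether prefetchPct takes effect online or needs a daemon recycle:

  # show the current value (default is 20)
  mmlsconfig prefetchPct

  # let prefetch/writebehind use up to 40% of the pagepool
  mmchconfig prefetchPct=40

Worth bumping it in steps and re-measuring with your actual sequential workload rather than trusting any single number.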
Also, do you have a dedicated metadata pool? If not, that could be a source of contention. I highly recommend a small pinnable LUN or two as a dedicated metadata pool (a rough stanza sketch follows below the quoted message).

Alec

On Thu, Feb 8, 2024, 7:01 AM Michal Hruška <[email protected]> wrote:

> @Aaron
> Yes, I can confirm that 2MB blocks are transferred over.
>
> @Jan-Frode
> We tried to change multiple parameters, but if you know the best combination for sequential IO, please let me know.
>
> #mmlsconfig
> autoload no
> dmapiFileHandleSize 32
> minReleaseLevel 5.1.9.0
> tscCmdAllowRemoteConnections no
> ccrEnabled yes
> cipherList AUTHONLY
> sdrNotifyAuthEnabled yes
> pagepool 64G
> maxblocksize 16384K
> maxMBpS 40000
> maxReceiverThreads 32
> nsdMaxWorkerThreads 512
> nsdMinWorkerThreads 8
> nsdMultiQueue 256
> nsdSmallThreadRatio 0
> nsdThreadsPerQueue 3
> prefetchAggressiveness 2
> adminMode central
>
> File systems in cluster:
> /dev/fs0
>
> @Uwe
> Using iohist we found out that GPFS is overloading one dm-device (it took about 500 ms to finish IOs). We replaced the "problematic" dm-device with a new one (as we have enough drives to play with), but the overloading issue just jumped to another dm-device. We believe this behaviour is caused by GPFS, but we are unable to locate the root cause of it.
>
> Best,
> Michal
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
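On the dedicated metadata pool: the general idea is to put metadata on a couple of small, fast NSDs (usage=metadataOnly in the system pool) and keep the big spinning LUNs as dataOnly. A rough sketch of the stanza file — the device paths, NSD names, server names and failure groups here are made up, so adjust them for your hardware:

  %nsd:
    device=/dev/mapper/fast_ssd_lun0
    nsd=meta_nsd01
    servers=nsdserver1,nsdserver2
    usage=metadataOnly
    failureGroup=1
    pool=system

  %nsd:
    device=/dev/mapper/big_hdd_lun0
    nsd=data_nsd01
    servers=nsdserver1,nsdserver2
    usage=dataOnly
    failureGroup=2
    pool=data1

  # create the NSDs and add them to the existing filesystem
  mmcrnsd -F meta_stanza.txt
  mmadddisk fs0 -F meta_stanza.txt

After that you'd change the existing disks to dataOnly and restripe metadata off them (mmchdisk plus mmrestripefs -r), but that's a longer operation and worth rehearsing outside production first.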
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
