Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average

2020-06-05 Thread Valdis Klētnieks
On Fri, 05 Jun 2020 14:24:27 -, "Saula, Oluwasijibomi" said:
> But with the RAID 6 writing costs Valdis explained, it now makes sense why
> the write IO was badly affected...
> Action [1,2,3,4,A] : The only valid responses are characters from this set:
> [1, 2, 3, 4, A]
> Action
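
For context, a minimal sketch of the RAID 6 write-penalty arithmetic Valdis referred to; the drive count and per-disk IOPS below are illustrative assumptions, not numbers from the thread:

    # RAID 6 small-write penalty: a random write that misses the controller
    # cache costs 3 reads + 3 writes (data, P parity, Q parity) = 6 disk I/Os.
    # Effective random-write IOPS ~= disks * per-disk IOPS / penalty
    echo $(( 24 * 150 / 6 ))   # e.g. 24 SAS drives at ~150 IOPS -> 600 IOPS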

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-05 Thread Jan-Frode Myklebust
On Fri, 5 Jun 2020 at 15:53, Giovanni Bracco wrote:
> answer in the text
>
> On 05/06/20 14:58, Jan-Frode Myklebust wrote:
> >
> > Could maybe be interesting to drop the NSD servers, and let all nodes
> > access the storage via srp ?
>
> no we can not: the production clusters fabric is a mix of a

Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average

2020-06-05 Thread Saula, Oluwasijibomi
Valdis/Kums/Fred/Kevin/Stephen, Thanks so much for your insights, thoughts, and pointers! - Certainly increased my knowledge and understanding of potential culprits to watch for... So we finally discovered the root cause of this problem: An unattended TSM restore exercise profusely writing to
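
For readers chasing similar symptoms, two stock GPFS diagnostics that help tie NSD server load back to a busy client workload; generic usage, not the exact steps taken in the thread:

    # On a loaded NSD server: recent I/O history (size, latency, which NSD)
    mmdiag --iohist
    # Long-running waiters often point at the node generating the traffic
    mmdiag --waiters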

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-05 Thread Giovanni Bracco
answer in the text

On 05/06/20 14:58, Jan-Frode Myklebust wrote:
> Could maybe be interesting to drop the NSD servers, and let all nodes
> access the storage via srp ?

no we can not: the production clusters fabric is a mix of a QDR based
cluster and an OPA based cluster and NSD nodes provide
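
For background on this exchange: clients that cannot see the SAN directly (here, nodes on two different fabrics, QDR InfiniBand and OPA) reach the disks through the servers list in each NSD definition. A hypothetical mmcrnsd stanza, with the device path and host names invented for illustration:

    # nsd.stanza -- illustrative only
    %nsd: device=/dev/sdb
      nsd=data01
      servers=nsdsrv01,nsdsrv02
      usage=dataAndMetadata
      failureGroup=1

Dropping the NSD servers, as suggested above, would instead require every node to see the LUNs directly (e.g. via SRP), which the mixed fabric rules out.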

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-05 Thread Jan-Frode Myklebust
Could maybe be interesting to drop the NSD servers, and let all nodes access the storage via srp ? Maybe turn off readahead, since it can cause performance degradation when GPFS reads 1 MB blocks scattered on the NSDs, so that read-ahead always reads too much. This might be the cause of the slow
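
On Linux, device-level readahead can be inspected and switched off per block device; a small sketch, assuming a hypothetical device name:

    # Readahead is reported in 512-byte sectors; setting it to 0 stops the
    # kernel from pulling in extra sectors around each scattered 1 MB read.
    blockdev --getra /dev/sdb
    blockdev --setra 0 /dev/sdb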

[gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-05 Thread Giovanni Bracco
In our lab we have received two storage servers, Supermicro SSG-6049P-E1CR24L, each with 24 HDs (9 TB SAS3) and an Avago 3108 RAID controller (2 GB cache). Before putting them into production for other purposes, we have set up a small GPFS test cluster to verify whether they can be used as storage (our
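
A minimal sketch of how such a test cluster might be brought up; node names, file system name, and options are invented here, not Giovanni's actual configuration:

    # Two-node test cluster, NSDs from a stanza file, then a file system
    mmcrcluster -N node1:quorum-manager,node2:quorum-manager \
                -r /usr/bin/ssh -p /usr/bin/scp
    mmchlicense server --accept -N node1,node2
    mmcrnsd -F nsd.stanza
    mmstartup -a
    mmcrfs testfs -F nsd.stanza -B 1M
    mmmount testfs -a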