Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN: effect of ignorePrefetchLUNCount

2020-06-16 Thread Jan-Frode Myklebust
On Tue, 16 Jun 2020 at 15:32, Giovanni Bracco wrote: > > I would correct maxMBpS -- put it at something reasonable, enable > > verbsRdmaSend=yes and > > ignorePrefetchLUNCount=yes. > > Now we have set: > verbsRdmaSend yes > ignorePrefetchLUNCount yes > maxMBpS 8000 > > but the only parameter
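
For reference, settings like these are normally applied with mmchconfig and verified with mmlsconfig. A minimal sketch, assuming cluster-wide scope; whether a daemon restart is needed for the RDMA setting is an assumption worth checking in the Spectrum Scale documentation:

    # Apply the tuning values discussed in the thread, cluster-wide
    mmchconfig verbsRdmaSend=yes,ignorePrefetchLUNCount=yes,maxMBpS=8000

    # Confirm the values now stored in the configuration
    mmlsconfig verbsRdmaSend ignorePrefetchLUNCount maxMBpS

    # Some RDMA-related parameters are only picked up after mmfsd restarts
    # on the NSD servers ("nsdNodes" is a hypothetical node class name)
    mmshutdown -N nsdNodes && mmstartup -N nsdNodes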

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN: effect of ignorePrefetchLUNCount

2020-06-16 Thread Giovanni Bracco
On 11/06/20 12:13, Jan-Frode Myklebust wrote: On Thu, Jun 11, 2020 at 9:53 AM Giovanni Bracco wrote: > > You could potentially still do SRP from QDR nodes, and via NSD for your > omnipath nodes. Going via NSD seems like a bit of a pointless

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-12 Thread Aaron Knister
gpfsug-discuss-boun...@spectrumscale.org > To: gpfsug main discussion list > Cc: gpfsug-discuss-boun...@spectrumscale.org, Agostino Funel > > Subject: [EXTERNAL] Re: [gpfsug-discuss] very low read performance in simple > spectrum scale/gpfs cluster with a storage-server SAN > D

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Luis Bolinches
Hi, the block size used for writes increases the IOPS on those cards, which might already be at the limit, so I would not rule out that lowering the IOPS for writes has a positive effect on reads; either way it is a smoking gun that needs to be addressed. My experience of ignoring those is not a positive one.

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Uwe Falke
gpfsug-discuss@spectrumscale.org Cc: Agostino Funel Date: 05/06/2020 14:22 Subject: [EXTERNAL] [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN Sent by: gpfsug-discuss-boun...@spectrumscale.org In our lab we have received two storage-ser

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Uwe Falke
"If you always give you will always have" -- Anonymous >> >> - Original message - >> From: Giovanni Bracco >> Sent by: gpfsug-discuss-boun...@spectrumscale.org >> To: Jan-Frode Myklebust , gpfsug main discussion >> list >> Cc: Agostino

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Giovanni Bracco
From: Giovanni Bracco Sent by: gpfsug-discuss-boun...@spectrumscale.org To: Jan-Frode Myklebust , gpfsug main discussion list Cc: Agostino Funel Subject: [EXTERNAL] Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN Date: Thu, J

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Jan-Frode Myklebust
On Thu, Jun 11, 2020 at 9:53 AM Giovanni Bracco wrote: > > > > > You could potentially still do SRP from QDR nodes, and via NSD for your > > omnipath nodes. Going via NSD seems like a bit of a pointless indirection. > > not really: both clusters, the 400 OPA nodes and the 300 QDR nodes, share > the

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Jonathan Buzzard
On 11/06/2020 08:53, Giovanni Bracco wrote: [SNIP] not really: both clusters, the 400 OPA nodes and the 300 QDR nodes, share the same data lake in Spectrum Scale/GPFS, so the NSD servers support the flexibility of the setup. The NSD servers make use of an IB SAN fabric (Mellanox FDR switch) where
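
To see which nodes reach the NSDs directly over the SAN and which go through the NSD servers, the NSD-to-device mapping can be listed from the cluster; a sketch, where the file system name gpfs1 is only an example:

    # Map every NSD to the local block device it resolves to on each node;
    # nodes with no local device will route I/O through the NSD servers
    mmlsnsd -M

    # List the NSDs of one file system together with their NSD server lists
    mmlsnsd -f gpfs1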

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Luis Bolinches
On that RAID 6, what is the logical RAID block size? 128K, 256K, other? -- Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations / Salutacions Luis Bolinches Consultant IT Specialist IBM Spectrum Scale development ESS & client adoption teams Mobile Phone: +358503112585
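
For the Avago/LSI 3108 mentioned earlier in the thread, the virtual-drive strip size can usually be read with the storcli utility; a sketch, assuming the standard install path and controller index /c0 (both assumptions):

    # Show all virtual drives on controller 0 and pick out the strip size line
    /opt/MegaRAID/storcli/storcli64 /c0/vall show all | grep -i 'strip size'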

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Giovanni Bracco
Comments and updates in the text: On 05/06/20 19:02, Jan-Frode Myklebust wrote: On Fri, 5 Jun 2020 at 15:53, Giovanni Bracco wrote: answer in the text On 05/06/20 14:58, Jan-Frode Myklebust wrote: > > Could it maybe be interesting to drop the

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-05 Thread Jan-Frode Myklebust
On Fri, 5 Jun 2020 at 15:53, Giovanni Bracco wrote: > answer in the text > > On 05/06/20 14:58, Jan-Frode Myklebust wrote: > > Could it maybe be interesting to drop the NSD servers, and let all nodes > > access the storage via SRP? > > no, we cannot: the production clusters' fabric is a mix of a

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-05 Thread Giovanni Bracco
answer in the text On 05/06/20 14:58, Jan-Frode Myklebust wrote: Could it maybe be interesting to drop the NSD servers, and let all nodes access the storage via SRP? no, we cannot: the production clusters' fabric is a mix of a QDR-based cluster and an OPA-based cluster, and the NSD nodes provide
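
For SAN-attached (non-NSD-server) access to work, the SRP LUNs must be visible on the client and discovered by GPFS; the /var/mmfs/etc/nsddevices user exit is the usual hook for that. A minimal sketch modeled on the shipped sample (/usr/lpp/mmfs/samples/nsddevices.sample); the multipath device names are examples only:

    #!/bin/bash
    # /var/mmfs/etc/nsddevices: tell GPFS which local block devices to probe
    # for NSDs. Output format is "<device relative to /dev> <device type>".
    for dev in dm-0 dm-1; do           # example devices, adjust to the SRP LUNs
        [ -e "/dev/$dev" ] && echo "$dev dmm"
    done
    # Exiting 0 lets GPFS also run its built-in discovery (as in the sample);
    # a non-zero exit would restrict discovery to the devices listed above.
    exit 0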

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-05 Thread Jan-Frode Myklebust
Could it maybe be interesting to drop the NSD servers, and let all nodes access the storage via SRP? Maybe turn off readahead, since it can cause performance degradation when GPFS reads 1 MB blocks scattered on the NSDs, so that read-ahead always reads too much. This might be the cause of the slow
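
If the readahead being referred to is the Linux block-layer readahead on the NSD servers (an assumption; it could equally be a setting on the RAID controller), it can be inspected and disabled per device:

    # Current readahead in 512-byte sectors for one LUN (device name is an example)
    blockdev --getra /dev/sdb

    # Disable OS readahead on that device so it does not amplify GPFS prefetch
    blockdev --setra 0 /dev/sdb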

[gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-05 Thread Giovanni Bracco
In our lab we have received two storage servers, Supermicro SSG-6049P-E1CR24L, with 24 HDDs each (9 TB SAS3) and an Avago 3108 RAID controller (2 GB cache), and before putting them into production for other purposes we have set up a small GPFS test cluster to verify whether they can be used as storage (our
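
As a baseline for that kind of verification, a crude sequential write-then-read test from a client node is often enough to show the problem; a sketch, where the mount point and file size are assumptions:

    # Write a large test file, then read it back after dropping the page cache;
    # the MB/s reported by the second dd is the sequential read rate
    dd if=/dev/zero of=/gpfs/test/bigfile bs=1M count=16384 oflag=direct
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/gpfs/test/bigfile of=/dev/null bs=1M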