Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Luis Bolinches
Hi, the block size used for writes drives up the IOPS on those cards, which might already be at their limit, so I would not rule out that lowering the IOPS for writes has a positive effect on reads; either way it is a smoking gun that needs to be addressed. My experience of ignoring those is not a positive one.
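
A back-of-envelope sketch of why the write path can inflate IOPS (hypothetical numbers, not taken from this thread):

    RAID-6 8+2, 256 KiB strip  ->  full stripe = 8 x 256 KiB = 2 MiB
    1 MiB GPFS block           ->  covers only half a stripe per write
                               ->  read-modify-write of the parity: extra
                                   strip reads plus parity reads/writes,
                                   multiplying physical IOPS on a controller
                                   that may already be saturated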

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Uwe Falke
Hi Giovanni, how do the waiters look on your clients when reading? Mit freundlichen Grüßen / Kind regards Dr. Uwe Falke IT Specialist Global Technology Services / Project Services Delivery / High Performance Computing +49 175 575 2877 Mobile Rathausstr. 7, 09111 Chemnitz, Germany
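
(On a client, the waiters can be sampled during a slow read with the standard diagnostic command, repeated a few times:

    mmdiag --waiters

Long-lived waiters on NSD client RPCs would point at the NSD servers or the SAN rather than at the client itself.)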

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Uwe Falke
While that point (the block size should be an integer multiple of the RAID stripe width) is a good one, its violation would explain slow writes, since only writes incur the read-modify-write parity penalty, but Giovanni talks of slow reads ... Mit freundlichen Grüßen / Kind regards Dr. Uwe Falke IT Specialist Global Technology Services / Project Services
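
(For reference, the file system block size can be read off with the standard command, device name hypothetical:

    mmlsfs fs1 -B

and compared against the stripe width of the RAID-6 arrays on the storage side.)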

Re: [gpfsug-discuss] [EXTERNAL] mmremotecluster access from SS 5.0.x to 4.2.3-x refuses id_rsa.pub

2020-06-11 Thread Mervini, Joseph A
mmchconfig nistCompliance=off on the newer system should work. Joe Mervini Sandia National Laboratories High Performance Computing 505.844.6770 jame...@sandia.gov On 6/11/20, 9:10 AM, "gpfsug-discuss-boun...@spectrumscale.org on behalf of David Johnson" wrote: I'm trying to
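
(A minimal sketch of the change plus a check that it took effect, using the attribute quoted above:

    mmchconfig nistCompliance=off
    mmlsconfig nistCompliance

Note that this relaxes NIST SP 800-131A enforcement for the whole cluster, so it is a trade-off rather than a pure workaround.)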

[gpfsug-discuss] mmremotecluster access from SS 5.0.x to 4.2.3-x refuses id_rsa.pub

2020-06-11 Thread David Johnson
I'm trying to access an old GPFS filesystem from a new cluster. It works up to the point of adding the SSL keys of the old cluster on the new one. From the mmremotecluster add command I get: "File _id_rsa.pub does not contain a nist sp 800-131a compliance key". Is there any way to override
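
(For context, the failing step would look roughly like this; cluster name, contact nodes and key path are hypothetical:

    mmremotecluster add oldcluster.example -n node1,node2 -k /tmp/old_id_rsa.pub

The 4.2.3 cluster's older-style key is what trips the NIST SP 800-131A check on the 5.0.x side.)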

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Giovanni Bracco
256K Giovanni On 11/06/20 10:01, Luis Bolinches wrote: On that RAID 6 what is the logical RAID block size? 128K, 256K, other? -- Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations / Salutacions Luis Bolinches Consultant IT Specialist IBM Spectrum Scale development ESS &

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Jan-Frode Myklebust
On Thu, Jun 11, 2020 at 9:53 AM Giovanni Bracco wrote: > > You could potentially still do SRP from QDR nodes, and via NSD for your > > omnipath nodes. Going via NSD seems like a bit pointless indirection. > not really: both clusters, the 400 OPA nodes and the 300 QDR nodes share the

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Jonathan Buzzard
On 11/06/2020 08:53, Giovanni Bracco wrote: [SNIP] not really: both clusters, the 400 OPA nodes and the 300 QDR nodes share the same data lake in Spectrum Scale/GPFS, so the NSD servers support the flexibility of the setup. NSD servers make use of an IB SAN fabric (Mellanox FDR switch) where
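
(A rough sketch of the topology as described in the thread:

    400 OPA clients --+
                      +--> NSD servers --(Mellanox FDR IB SAN)--> RAID-6 storage
    300 QDR clients --+

so both compute fabrics funnel through the same NSD servers and storage SAN.)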

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Luis Bolinches
On that RAID 6 what is the logical RAID block size? 128K, 256K, other? -- Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations / Salutacions Luis Bolinches Consultant IT Specialist IBM Spectrum Scale development ESS & client adoption teams Mobile Phone: +358503112585

Re: [gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

2020-06-11 Thread Giovanni Bracco
Comments and updates in the text: On 05/06/20 19:02, Jan-Frode Myklebust wrote: On Fri, 5 Jun 2020 at 15:53, Giovanni Bracco wrote: answer in the text On 05/06/20 14:58, Jan-Frode Myklebust wrote: > > Could maybe be interesting to drop the