Hi
On top of what has been mentioned here (RAID <-> BS alignment, among many other things), I would suggest looking at the last 4-5 slides of a 2018 (disclaimer: my own) London UG presentation: http://files.gpfsug.org/presentations/2018/London/14_LuisBolinches_GPFSUG.pdf
It gives a starting point.
On Fri, 05 Jun 2020 14:24:27 -, "Saula, Oluwasijibomi" said:
> But with the RAID 6 write costs Valdis explained, it now makes sense why
> the write IO was badly affected...
(Valdis Klētnieks)
--
Message: 1
Date: Thu, 04 Jun 2020 21:17:08 -0400
From: "Valdis Klētnieks"
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average
On Thu, 04 Jun 2020 15:33:18 -, "Saula, Oluwasijibomi" said:
> However, I still can't understand why write IO operations are 5x more latent
> than read operations to the same class of disks.
Two things that may be biting you:
First, on a RAID 5 or 6 LUN, most of the time you only need to
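The message is cut off above, but the usual RAID write-penalty arithmetic can be sketched directly. The following is a minimal illustration (not taken from the truncated email): a small partial-stripe write on parity RAID is done as read-modify-write, so each logical write costs several physical IOs.

```python
def small_write_ios(parity_disks):
    # Read-modify-write for a partial-stripe write: read the old data block
    # and each old parity block, then write the new data block and each
    # recomputed parity block.
    return 2 * (1 + parity_disks)

print(small_write_ios(1))  # RAID 5 (single parity): 4 IOs per small write
print(small_write_ios(2))  # RAID 6 (double parity): 6 IOs per small write
```

That 4x-6x physical amplification, versus a single IO for a read, is consistent with writes showing several times the latency of reads against the same disks.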
(Stephen Ulmer)
--
Message: 1
Date: Wed, 3 Jun 2020 22:19:49 -0400
From: Stephen Ulmer
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Client Latency and High NSD Server Load
Average
--
Message: 3
Date: Wed, 3 Jun 2020 21:56:04 +0000
From: "Frederick Stock"
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average
From the waiters you provided I would guess there is something amiss with some of your storage systems. Since those waiters are on NSD servers they are waiting for IO requests to the kernel to complete. Generally IOs are expected to complete in milliseconds, not seconds. You could look at the
> any circumstance?
>
> We are seeing client latency between 6 and 9 seconds and are wondering if
> some GPFS configuration or NSD server condition may be triggering this poor
> performance.
>
> Thanks,
>
> Oluwasijibomi (Siji) Saula
>
> HPC Syste
I saw something EXACTLY like this way back in the 3.x days when I had a backend
storage unit that had a flaky main memory issue and some enclosures were
constantly flapping between controllers for ownership. Some NSDs were
affected, some were not. I can imagine this could still happen in 4.x
Hello, Oluwasijibomi,
I suppose you are not running ESS (might be wrong on this).
I'd check the IO history on the NSD servers (high IO times?) and in
addition the IO traffic at the block device level, e.g. with iostat or
the like (still high IO times there? Are the IO sizes OK or too low?).
Are you running ESS?
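As a rough illustration of the iostat-style check suggested above: the sample line and column layout below are hypothetical (real `iostat -x` columns vary by sysstat version), and the 50 ms threshold is an arbitrary example, but the idea is to flag devices whose average IO time is far above the few-milliseconds range expected of healthy disks.

```python
# Hypothetical one-device sample from `iostat -x`; columns assumed here:
# device, r/s, w/s, rkB/s, wkB/s, r_await(ms), w_await(ms), %util
SAMPLE = "sdb 120.0 340.0 4800.0 27200.0 2.1 85.3 97.4"

def parse_line(line):
    f = line.split()
    return {"dev": f[0], "r_await_ms": float(f[5]), "w_await_ms": float(f[6])}

stats = parse_line(SAMPLE)
# Healthy disks complete IOs in milliseconds; tens of ms average wait
# on writes (as here) points at the storage backend, not GPFS itself.
slow_writes = stats["w_await_ms"] > 50.0
print(stats["dev"], slow_writes)
```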
> On Jun 3, 2020, at 2:56 PM, Frederick Stock wrote:
>
> Does the output of mmdf show that data is evenly distributed across your
> NSDs? If not that could be contributing to your problem. Also, are your
> NSDs evenly distributed across your NSD servers, and the NSDs
> configured so the first NSD server for each is not the same one?
--
From: "Frederick Stock"
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Client Latency and High NSD Server Load
Average
Does the output of mmdf show that data is evenly distributed across your NSDs? If not, that could be contributing to your problem. Also, are your NSDs evenly distributed across your NSD servers, and the NSDs configured so the first NSD server for each is not the same one?
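To make the balance check concrete: the per-NSD free percentages below are hypothetical numbers of the kind you might read off `mmdf <fsname>` output (exact columns depend on your Scale version), and the 10-point threshold is an arbitrary example. A large spread means new allocations pile onto the emptier NSDs, concentrating IO on a few disks instead of striping evenly.

```python
# Hypothetical free-space percentage per NSD, transcribed from mmdf output.
free_pct = {"nsd1": 62.0, "nsd2": 61.0, "nsd3": 18.0, "nsd4": 60.0}

spread = max(free_pct.values()) - min(free_pct.values())
# With one NSD far fuller than the rest, GPFS cannot stripe new blocks
# evenly, so that NSD (and its server) takes a disproportionate load.
unbalanced = spread > 10.0
print(round(spread, 1), unbalanced)
```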