Valdis/Kums/Fred/Kevin/Stephen,

Thanks so much for your insights, thoughts, and pointers! They certainly 
increased my knowledge and understanding of potential culprits to watch for...

So we finally discovered the root cause of this problem: an unattended TSM 
restore exercise stuck at an interactive prompt, writing the same message to a 
single output file over and over again until it grew into the GBs! I'm opening 
a ticket with TSM support to learn how to mitigate this in the future.

But given the RAID 6 write costs Valdis explained, it now makes sense why the 
write IO was so badly affected...
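
(Out of curiosity I sketched that write penalty roughly in Python below - the 
per-drive IOPS and disk count are made-up illustrative numbers, not figures 
from our array:)

# Rough back-of-the-envelope RAID small-write penalty.
# NOTE: spindle_iops and disks_in_lun are assumed values, for illustration only.

def small_write_ios(raid_level):
    """Back-end I/Os needed to service one small (partial-stripe) write."""
    if raid_level == 5:
        # read old data + old parity, then write new data + new parity
        return 4
    if raid_level == 6:
        # two parity blocks to read and rewrite instead of one
        return 6
    raise ValueError("unhandled RAID level")

spindle_iops = 150   # assumed IOPS of a single NL-SAS drive
disks_in_lun = 10    # assumed number of drives behind the LUN

raw_iops = spindle_iops * disks_in_lun
for level in (5, 6):
    penalty = small_write_ios(level)
    print(f"RAID {level}: ~{penalty} back-end I/Os per small write, "
          f"so roughly {raw_iops // penalty} write IOPS vs ~{raw_iops} read IOPS")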

Excerpt from output file:


--- User Action is Required ---

File '/gpfs1/X/Y/Z/fileABC' is write protected


Select an appropriate action

  1. Force an overwrite for this object

  2. Force an overwrite on all objects that are write protected

  3. Skip this object

  4. Skip all objects that are write protected

  A. Abort this operation

Action [1,2,3,4,A] : The only valid responses are characters from this set: [1, 2, 3, 4, A]

Action [1,2,3,4,A] : The only valid responses are characters from this set: [1, 2, 3, 4, A]
Action [1,2,3,4,A] : The only valid responses are characters from this set: [1, 2, 3, 4, A]
Action [1,2,3,4,A] : The only valid responses are characters from this set: [1, 2, 3, 4, A]
...


Thanks,


Oluwasijibomi (Siji) Saula

HPC Systems Administrator  /  Information Technology



Research 2 Building 220B / Fargo ND 58108-6050

p: 701.231.7749 / www.ndsu.edu



[cid:[email protected]]



________________________________
From: [email protected] 
<[email protected]> on behalf of 
[email protected] 
<[email protected]>
Sent: Friday, June 5, 2020 6:00 AM
To: [email protected] <[email protected]>
Subject: gpfsug-discuss Digest, Vol 101, Issue 12



Today's Topics:

   1. Re: Client Latency and High NSD Server Load Average
      (Valdis Klētnieks)


----------------------------------------------------------------------

Message: 1
Date: Thu, 04 Jun 2020 21:17:08 -0400
From: "Valdis Kl=?utf-8?Q?=c4=93?=tnieks" <[email protected]>
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Client Latency and High NSD Server Load
        Average
Message-ID: <309214.1591319828@turing-police>
Content-Type: text/plain; charset="us-ascii"

On Thu, 04 Jun 2020 15:33:18 -0000, "Saula, Oluwasijibomi" said:

> However, I still can't understand why write IO operations are 5x more latent
> than read operations to the same class of disks.

Two things that may be biting you:

First, on a RAID 5 or 6 LUN, a read usually only needs one physical read (the
data block). To do a small write, you have to read the old data and parity
block(s), compute the new parity, and write back the new data block and new
parity block(s). This is often called the "RAID write penalty".

Second, if a read is smaller than the physical block size, the storage array
can read the block and return only the fragment needed.  But on a write, it
has to read the whole block, splice in the new data, and write the block back
- an RMW (read-modify-write) cycle.
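
A minimal sketch of that read-modify-write amplification (the block and write
sizes below are assumed, illustrative values, not the actual array geometry):

# Illustrative read-modify-write (RMW) amplification for a sub-block write.
# physical_block and logical_write are assumptions, not measured values.

physical_block = 512 * 1024   # assume a 512 KiB array segment/block size
logical_write  = 4 * 1024     # a 4 KiB application write

if logical_write < physical_block:
    # array must read the whole block, splice in the new bytes, write it back
    bytes_on_backend = 2 * physical_block
else:
    # a full, aligned block write needs no read beforehand
    bytes_on_backend = logical_write

print(f"{logical_write} B logical write moves {bytes_on_backend} B on the "
      f"back end ({bytes_on_backend / logical_write:.0f}x amplification)")
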
------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


End of gpfsug-discuss Digest, Vol 101, Issue 12
***********************************************