Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Luis Bolinches
Hi, just want to add two things: +1 to having more than one copy of metadata. The full-stride write is important, as has repeatedly been stated here. However, there are implementations that have successfully reduced this cost (IBM FlashCore comes to mind). At the last London UG I presented different storage

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Jonathan Buzzard
On Wed, 2018-09-05 at 13:37 -0400, Frederick Stock wrote: > Another option for saving space is to not keep 2 copies of the > metadata within GPFS.  The SSDs are mirrored so you have two copies > though very likely they share a possible single point of failure and > that could be a deal breaker.  I

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Achim Rehor
Hi Kevin, as you already pointed out, having a RAID stripe size (or a multiple of it) that does not match the GPFS block size is a bad idea: every write would cause a read-modify-write operation to keep the parity consistent. So for data LUNs, RAID5 with 4+P or 8+P is fully OK. For metadata, if you are keen on
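For concreteness, a quick worked example of that alignment (array geometry hypothetical, not from the thread): with 8+P RAID5 and a 512 KiB segment size, the full stripe is 8 x 512 KiB = 4 MiB, so creating the filesystem with a matching 4 MiB block size makes every full-block write a full-stride write:

    # Hypothetical sketch: GPFS block size matched to the RAID full stripe
    # (8 data segments x 512 KiB = 4 MiB), so full-block writes avoid the
    # parity read-modify-write.
    mmcrfs gpfs1 -F nsd.stanza -B 4M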

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Buterbaugh, Kevin L
Hi All, wow - my query got more responses than I expected, and my sincere thanks to all who took the time to respond! At this point we have two GPFS filesystems … one which is basically “/home” plus some software installations, and the other which is “/scratch” and “/data” (former

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Marc A Kaplan
Perhaps repeating myself, but consider no-RAID or RAID "0" and -M MaxMetadataReplicas: Specifies the default maximum number of copies of inodes, directories, and indirect blocks for a file. Valid values are 1, 2, and 3. This value cannot be less than the value of DefaultMetadataReplicas. The
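For reference, a minimal sketch of how those replication settings might be applied (device and stanza file names hypothetical): -m sets DefaultMetadataReplicas and -M caps it, so starting at 2 leaves headroom to raise it later:

    # Hypothetical sketch: two metadata copies now, room for three later.
    mmcrfs gpfs1 -F nsd.stanza -m 2 -M 3
    # Raise the default later and restripe so existing metadata is re-replicated.
    mmchfs gpfs1 -m 3
    mmrestripefs gpfs1 -R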

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Marc A Kaplan
A somewhat smarter RAID controller will "only" need to read the old value of the single changed data segment and the corresponding parity segment; knowing the new value of the data block, it can then compute the new parity segment value without rereading the entire stripe. Still 2 reads
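For the record, the identity behind that shortcut is the standard RAID5 small-write computation:

    new_parity = old_parity XOR old_data XOR new_data

hence two reads (old data segment, old parity segment) and two writes (new data, new parity) per partial-stripe update, regardless of stripe width.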

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Bryan Banister
I have questions about how the GPFS metadata replication of 3 works. 1. Is it basically the same as replication of 2 but with one more copy, making recovery much more likely? 2. If there is nothing that is checking that the data was correctly read off of the device (e.g. CRC

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Aaron Knister
Answers inline, based on my recollection of experiences we've had here: On 9/6/18 12:19 PM, Bryan Banister wrote: I have questions about how the GPFS metadata replication of 3 works. 1. Is it basically the same as replication of 2 but with one more copy, making recovery much more

Re: [gpfsug-discuss] RAID type for system pool

2018-09-06 Thread Simon Thompson
I thought reads were always round-robin (in some form) unless you set readReplicaPolicy. And I thought with FSSTRUCT errors you had to use offline mmfsck to fix them. Simon
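For reference, a hedged sketch of both knobs mentioned (filesystem name hypothetical; check the documentation for your Scale level for supported readReplicaPolicy values):

    # Prefer the replica on locally attached disks for reads.
    mmchconfig readReplicaPolicy=local
    # FSSTRUCT errors generally call for an offline check: unmount the
    # filesystem on all nodes, then run mmfsck.
    mmumount gpfs1 -a
    mmfsck gpfs1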