Some thoughts:
You give typical cumulative usage values; however, a fast pool might 
matter most during traffic spikes. Do you have spikes that drive your 
current system to the edge?

Then: using the SSD pool for writes is straightforward (placement), but 
using it for reads will only pay off if data are either somehow 
pre-fetched to the pool, or read more than once before being migrated 
back to the HDD pool(s). As you wrote, write traffic is lower than read 
traffic.

RAID1 vs. RAID6: the RMW (read-modify-write) penalty of parity-based 
RAIDs was mentioned; it hits writes smaller than the full stripe width of 
your RAID. What type of write I/O do you have (or expect)? This may also 
matter for choosing the quality of the SSDs: with RMW in mind, you will 
have a comparably huge amount of data written to the SSD devices if your 
I/O traffic consists of myriads of small I/Os and you organize the SSDs 
in a RAID5 or RAID6.
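To put rough numbers on that RMW penalty, here is a back-of-envelope 
sketch (textbook device-I/O counts for a single sub-stripe write, not 
measurements from any particular array or controller):

```python
# Back-of-envelope: device I/Os generated by ONE small (sub-stripe-width)
# host write, comparing RAID1 mirroring with parity-RAID read-modify-write.
# These are the textbook RMW counts; real controllers with write caches
# may do better.

def rmw_ios(parity_disks: int) -> dict:
    """Device I/Os for one small host write on a parity RAID (RMW path).

    RMW: read the old data strip and each old parity strip, then write
    the new data strip and each recomputed parity strip.
    """
    reads = 1 + parity_disks           # old data strip + old parity strips
    writes = 1 + parity_disks          # new data strip + new parity strips
    return {"reads": reads, "writes": writes, "total": reads + writes}

raid1 = {"reads": 0, "writes": 2, "total": 2}   # just write both mirrors
raid5 = rmw_ios(1)                               # 2 reads + 2 writes
raid6 = rmw_ios(2)                               # 3 reads + 3 writes

for name, ios in [("RAID1", raid1), ("RAID5", raid5), ("RAID6", raid6)]:
    print(f"{name}: {ios['total']} device I/Os "
          f"({ios['writes']} device writes) per small host write")
```

So for a workload of myriads of small writes, a RAID6 of SSDs turns each 
host write into three device writes (plus three reads), which is exactly 
the flash-wear concern above; full-stripe writes avoid the RMW path 
entirely.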

I suppose your current system is well set to provide the required 
aggregate throughput. Now, what kind of improvement do you expect? How are 
the clients connected? Would they have sufficient network bandwidth to see 
improvements at all?
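Regarding the placement plus nightly mmapplypolicy scheme described: the 
rules might look roughly like the sketch below. The pool names ('ssd', 
'data'), the thresholds, and the one-day age are placeholders to be tuned, 
not a tested policy.

```sql
/* Sketch only: 'ssd' and 'data' are placeholder pool names; the
   THRESHOLD values and the 1-day age criterion are illustrative. */

/* New files land on the SSD pool by default. */
RULE 'place' SET POOL 'ssd'

/* Nightly mmapplypolicy run: push colder files back to spinning disk,
   starting when the SSD pool is 80% full, draining it to 60%. */
RULE 'flush' MIGRATE FROM POOL 'ssd' THRESHOLD(80,60)
     WEIGHT(DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME))
     TO POOL 'data'
     WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 1
```

One caveat worth noting: placement rules are evaluated at file creation, 
when the eventual file size is not yet known, so restricting the SSD pool 
to "smaller files" would have to be enforced by migration afterwards 
rather than by initial placement.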


Mit freundlichen Grüßen / Kind regards

 
Dr. Uwe Falke
 
IT Specialist
High Performance Computing Services / Integrated Technology Services / 
Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefa...@de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: 
Andreas Hasse, Thorsten Moehring
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, 
HRB 17122 


gpfsug-discuss-boun...@spectrumscale.org wrote on 04/19/2017 09:53:42 PM:

> From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date: 04/19/2017 09:54 PM
> Subject: [gpfsug-discuss] RAID config for SSD's used for data
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> 
> Hi All, 
> 
> We currently have what I believe is a fairly typical setup -- 
> metadata for our GPFS filesystems is the only thing in the system 
> pool and it's on SSD, while data is on spinning disk (RAID 6 LUNs). 
> Everything connected via 8 Gb FC SAN.  8 NSD servers.  Roughly 1 PB 
> usable space.
> 
> Now let's just say that you have a little bit of money to spend. 
> Your I/O demands aren't great - in fact, they're way on the low end 
> -- typical (cumulative) usage is 200 - 600 MB/sec read, less than 
> that for writes.  But while GPFS has always been great and therefore
> you don't need to Make GPFS Great Again, you do want to provide your
> users with the best possible environment.
> 
> So you're considering the purchase of a dual-controller FC storage 
> array with 12 or so 1.8 TB SSDs in it, with the idea being that 
> that storage would be in its own storage pool and that pool would 
> be the default location for I/O for your main filesystem -- at least 
> for smaller files.  You intend to use mmapplypolicy nightly to move 
> data to / from this pool and the spinning disk pools.
> 
> Given all that -- would you configure those disks as 6 RAID 1 mirrors
> and have 6 different primary NSD servers, or would it be feasible to 
> configure one big RAID 6 LUN?  I'm thinking the latter is not a good 
> idea, as there could only be one primary NSD server for that one LUN, 
> but given that 1) I have no experience with this, and 2) I have been 
> wrong once or twice before (<grin>), I'm looking for advice.  Thanks!
> 
> --
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and 
> Education
> kevin.buterba...@vanderbilt.edu - (615)875-9633
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss


