Re: [gpfsug-discuss] Use of commodity HDs on large GPFS client base clusters?

2016-03-19 Thread Marc A Kaplan
IBM ESS, GSS, GNR, and Perseus all refer to the same "declustered" IBM 
RAID-in-software technology with advanced striping and error recovery.

I just googled some of those terms and found this summary, not written by 
IBM:

http://www.raidinc.com/file-storage/gss-ess

Also, this is now a "mature" technology. IBM has been doing this since 
before 2008.  See pages 9 and 10 of:

http://storageconference.us/2008/presentations/2.Tuesday/6.Haskin.pdf
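
As a rough, back-of-envelope illustration of why declustering shortens 
rebuilds (a sketch only; the drive size, streaming rate, and array width 
below are assumptions, not GNR/ESS measurements): in a conventional RAID 
group the rebuild is bottlenecked by writing the reconstructed data onto a 
single spare at roughly one drive's speed, whereas in a declustered array 
the failed drive's strips are scattered across the whole array and every 
surviving drive handles a small slice of the work in parallel.

# Back-of-envelope rebuild-time comparison; all numbers are assumptions.
DRIVE_TB = 8          # capacity of the failed drive, TB (assumed)
DRIVE_MBPS = 150      # sustained streaming rate per drive, MB/s (assumed)
ARRAY_DRIVES = 58     # drives sharing the declustered array (assumed)

MB_PER_TB = 1_000_000

# Conventional RAID: one hot spare absorbs all reconstructed data,
# so the rebuild runs at about one drive's streaming rate.
conventional_hours = (DRIVE_TB * MB_PER_TB / DRIVE_MBPS) / 3600

# Declustered RAID: the surviving drives share the rebuild I/O,
# so the elapsed time shrinks roughly with the array width.
declustered_hours = conventional_hours / (ARRAY_DRIVES - 1)

print(f"conventional rebuild: ~{conventional_hours:.1f} h")
print(f"declustered rebuild:  ~{declustered_hours:.1f} h")

In practice the speedup is bounded by rebuild throttling and other 
overheads, but this scaling is the basic reason rebuild exposure drops as 
the declustered array gets wider.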



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Use of commodity HDs on large GPFS client base clusters?

2016-03-15 Thread Oesterlin, Robert
Hi Jaime,

I have some fairly large clusters (though not as large as you describe) running on 
“roll your own” storage subsystems of various types. You’re asking a broad 
question here on performance and rebuild times. I can’t speak to a comparison 
with ESS (I’m sure IBM can comment), but if you want to discuss some of my 
experiences with larger clusters, HDs, and performance (multi-PB), I’d be happy 
to do so. Drop me a note at robert.oester...@nuance.com and we can chat at 
length.

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid



From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Jaime Pinto 
<pi...@scinet.utoronto.ca>
Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: Tuesday, March 15, 2016 at 2:39 PM
To: gpfsug-disc...@gpfsug.org
Subject: [gpfsug-discuss] Use of commodity HDs on large GPFS client base 
clusters?

I'd like to hear about performance considerations from sites that may
be using "non-IBM sanctioned" storage hardware, as opposed to appliances
such as DDN, GSS, or ESS (we have all of these).

For instance, how would that compare with ESS, which I understand has
some sort of "dispersed parity" feature that substantially reduces
rebuild time in the case of HD failures?

I'm particularly interested in HPC sites with 5000+ clients mounting
such a commodity NSD+HD setup.

Thanks
Jaime


---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of Toronto.


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Use of commodity HDs on large GPFS client base clusters?

2016-03-15 Thread Jaime Pinto
I'd like to hear about performance considerations from sites that may
be using "non-IBM sanctioned" storage hardware, as opposed to appliances
such as DDN, GSS, or ESS (we have all of these).


For instance, how would that compare with ESS, which I understand has
some sort of "dispersed parity" feature that substantially reduces
rebuild time in the case of HD failures?
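
As a toy illustration of that "dispersed parity" idea, here is a small
Python sketch (made-up drive counts and stripe widths, not the actual
ESS/GNR layout): it scatters each stripe across a random subset of
drives and then counts how little of the rebuild read work lands on any
single surviving drive when one drive fails.

import random
from collections import Counter

# Toy "dispersed parity" placement: each stripe's strips land on a random
# subset of drives, so there is no fixed RAID group.  All sizes here are
# illustrative assumptions.
DRIVES = 58             # drives in the declustered array (assumed)
STRIPS_PER_STRIPE = 11  # e.g. 8 data + 3 parity strips (assumed)
STRIPES = 20000         # stripes in the toy array (assumed)

random.seed(0)
placement = [random.sample(range(DRIVES), STRIPS_PER_STRIPE)
             for _ in range(STRIPES)]

failed = 0  # pretend drive 0 fails
affected = [s for s in placement if failed in s]   # stripes needing repair

# How many strips each surviving drive must read to rebuild the lost data
load = Counter(d for s in affected for d in s if d != failed)

print(f"stripes touched by the failure: {len(affected)} of {STRIPES}")
print(f"max strips read by any one surviving drive: {max(load.values())}")
print(f"average per surviving drive: {sum(load.values()) / (DRIVES - 1):.0f}")

Because every surviving drive does only a small, roughly equal share of
the reads (and of the writes for the reconstructed strips), the rebuild
can run at many drives' worth of bandwidth instead of being serialized
onto the few members of a fixed RAID group.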


I'm particularly interested in HPC sites with 5000+ clients mounting
such a commodity NSD+HD setup.


Thanks
Jaime


---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of Toronto.


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss