Yes.  Really the only other option we have (and not a bad one) is getting a 
V7000 Unified in there, if we can get the price down far enough.  That's a 
reasonable fit since all they really want is SMB shares at the remote site.  I 
just keep thinking a set of servers would do the trick and be cheaper.



From: Zachary Giles <zgi...@gmail.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: Friday, March 4, 2016 at 10:26 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] Small cluster

You can do FPO for non-Hadoop workloads. It just changes how disks are laid 
out below the GPFS filesystem layer and looks like a normal GPFS system 
(mostly).  I do think there were some restrictions on non-FPO nodes mounting 
FPO filesystems via multi-cluster; not sure if those are still there. Any 
input on that from IBM?
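
For anyone who hasn't played with it: FPO is basically just attributes on a 
storage pool in the stanza file you feed to mmcrfs. A rough, from-memory 
sketch, with the pool/NSD/node names made up, so check the docs before 
trusting it:

  # pool stanza: layoutMap=cluster + allowWriteAffinity=yes is what makes it FPO
  %pool:
    pool=datapool
    blockSize=1M
    layoutMap=cluster
    allowWriteAffinity=yes
    writeAffinityDepth=1
    blockGroupFactor=128

  # one NSD per internal disk, each server in its own failure group
  %nsd:
    nsd=data01
    servers=node1
    device=/dev/sdb
    usage=dataOnly
    failureGroup=1
    pool=datapool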

If the data is small enough, and with 3-way replication, it might just be wise 
to do internal storage and 3x rep. A 36TB 2U server is ~$10K (just throwing 
out rough numbers), and 3 of those per site would fit in your budget.
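
The 3x rep part is just the replication settings at filesystem creation time. 
Something like this, assuming a stanza file like the one above and NSDs spread 
across three failure groups (one per server) so each replica lands on a 
different box (gpfs1 and nsd.stanza are just placeholder names):

  # 3 copies of both data and metadata, as the default and the maximum
  mmcrfs gpfs1 -F nsd.stanza -m 3 -M 3 -r 3 -R 3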

Again, it depends on your requirements, where you need to sit on the spectrum 
between 'science experiment' and production, GPFS knowledge level, etc., etc.

This is actually an interesting and somewhat underserved space for small 
enterprises. If you just want 10-20TB active-active online everywhere, say, for 
VMware, or NFS, or something else, there aren't all that many good solutions 
today that scale down far enough at a decent price. It's easy with many, many 
PB, but small... I don't know. I think the above sounds as good as anything 
without going SAN-crazy.



On Fri, Mar 4, 2016 at 11:21 AM, mark.b...@siriuscom.com wrote:
I guess this is really my question.  Budget is less than $50k per site and they 
need around 20TB of storage.  Two nodes with an MD3 or something may work.  But 
could it work (and be successful) with just servers and internal drives?  
Should I do FPO for non-Hadoop-like workloads?  I didn’t think I could get 
GPFS Native RAID except in the ESS (GSS no longer exists, if I remember 
correctly).  Do I just make replicas and call it good?


Mark

From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of Marc A Kaplan 
<makap...@us.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: Friday, March 4, 2016 at 10:09 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] Small cluster

Jon, I don't doubt your experience, but it's not quite fair, or even sensible, 
to make a decision today based on what was available in the GPFS 2.3 era.

We are now at GPFS 4.2, with support for 3-way replication and FPO.
We also have RAID controllers, IB, "Native Raid", the ESS and GSS solutions, 
and more.

So there are more choices and more options, which makes finding an "optimal" 
solution more difficult.

To begin with, as with any provisioning problem, one should try to state: 
requirements, goals, budgets, constraints, failure/tolerance models/assumptions,
expected workloads, desired performance, etc, etc.








--
Zach Giles
zgi...@gmail.com
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
