We are using our 612i22s SAN. It works great with CloudStack.
Check them out at www.nfinausa.com
Warren
On 2013-03-14 15:10, Bryan Whitehead wrote:
I've been happy with GlusterFS for a SharedMountPoint kind of storage.
Works great.
(Note: we invested in InfiniBand to make things fast.)
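For reference, a minimal sketch of the SharedMountPoint setup Bryan describes: mount the same Gluster volume at an identical path on every KVM host, then point CloudStack's SharedMountPoint primary storage at that path. The volume, hostname, and mount point below are hypothetical.

```shell
# Hypothetical server/volume/path -- run on every KVM host in the cluster.
mount -t glusterfs gluster1.example.com:/primary /mnt/primary

# Persist it across reboots; _netdev delays the mount until networking is up.
echo "gluster1.example.com:/primary /mnt/primary glusterfs defaults,_netdev 0 0" >> /etc/fstab
```

CloudStack then treats /mnt/primary as a preset mount, so every host in the cluster must see the same data at the same path.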
On Thu, Mar 14, 2013 at 11:51 AM, Musayev, Ilya <imusa...@webmd.net>
wrote:
I've seen several folks build their own storage clusters and use Nexenta.
We use EMC VMAX.
-----Original Message-----
From: David Nalley [mailto:da...@gnsa.us]
Sent: Thursday, March 14, 2013 12:52 PM
To: cloudstack-users@incubator.apache.org
Subject: Re: What's everyone using for primary storage?
On Thu, Mar 14, 2013 at 11:27 AM, Kirk Jantzer
<kirk.jant...@gmail.com>
wrote:
> For my testing, because of available resources, I chose local storage.
> However, shared storage (like GlusterFS or the like) is appealing.
> What are you using? Why did you choose what you're using? How many
> instances are you running? etc.
> Thanks!!
So I think it depends on workload and use case.
If your applications are highly fault tolerant and largely stateless, you
simply may not care about HA. Besides, you'll almost certainly get better
performance with local storage.
If you are running a test/dev environment, you can probably tolerate
instance failure, so why use shared storage?
If people are going to come scream at you and threaten to take money away
if something fails, perhaps you want something a bit more robust.
The best of both worlds (with plenty of compromises too) is distributed,
replicated shared storage like Ceph RBD. (GlusterFS 3.4, with all of the
work that IBM has done around KVM, is promising, but yet to be released.
Early versions were robust, but had problems providing decent IO at any
scale.)
Sheepdog is also promising, and I keep hearing there are patches incoming
for Sheepdog support. Of course, these are all KVM-only for the moment.
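As a rough sketch of what KVM-side RBD access looks like under the hood (the pool, image, and size are hypothetical, and CloudStack's RBD integration drives this through libvirt rather than by hand):

```shell
# Hypothetical pool "cloudstack" and image name -- create a 10 GB RBD image
# (rbd sizes are in MB by default) and confirm QEMU/KVM can open it directly.
rbd create cloudstack/test-volume --size 10240
qemu-img info rbd:cloudstack/test-volume
```

Because QEMU talks to the Ceph cluster natively over the network, there is no shared filesystem or mount point involved, which is part of why this path is KVM-only.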
There are also plenty of people who do both local and shared storage, with
higher internal costs for deploying to shared storage, on the assumption
that folks would use it for things that need a higher level of resiliency
or less tolerance for failure.
For shared storage, I've seen everything from NFS running on Linux to
things like Isilon, NetApp, and EMC - again, the choice depending on the
tolerance for failure.
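For the NFS-on-Linux end of that spectrum, a minimal sketch (export path, subnet, and server name are hypothetical) of an export that CloudStack primary storage can mount from every hypervisor:

```shell
# Hypothetical export and client subnet -- no_root_squash is typically needed
# because the hypervisors access the storage as root.
echo "/export/primary 10.0.0.0/24(rw,async,no_root_squash)" >> /etc/exports
exportfs -ra

# From a hypervisor, verify the export is visible before adding it in CloudStack:
showmount -e nfs.example.com
</imports>
```

Note that async trades durability for speed here, which lines up with David's point: how much failure tolerance you buy is a deliberate choice.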
--David