to RAID 5/6 that is a
little more efficient while maintaining redundancy?)
- am I missing anything (either re. GlusterFS or other alternatives)
Thanks very much for any suggestions and advice.
Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is. Yogi Berra
attached - I'm not
really in a position to split things up.)
Any comments, advice, suggestions are most welcome.
Thanks very much,
Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is. Yogi Berra
-in and migration - i.e. how to get from 2
production nodes running Xen, DRBD, and Pacemaker to 4 nodes running
Gluster/Xen/Pacemaker or some other failover capability?
Or are we barking up the wrong tree entirely?
Thanks very much,
Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is. Yogi Berra
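The migration question above (2 nodes on DRBD to 4 nodes on Gluster) usually starts with building the replicated volume itself. A minimal sketch of the Gluster side, assuming hypothetical hostnames node1-node4, a brick path of /data/brick, and a volume name vmstore (none of which come from this thread):

```shell
# From node1: add the other three servers to the trusted pool
gluster peer probe node2
gluster peer probe node3
gluster peer probe node4

# Create a 4-brick volume with replica 2 (each file stored on 2 nodes),
# then start it
gluster volume create vmstore replica 2 \
    node1:/data/brick node2:/data/brick \
    node3:/data/brick node4:/data/brick
gluster volume start vmstore
```

With replica 2 across 4 bricks this gives distribute-over-replicate: roughly DRBD-style redundancy, but spread across twice the nodes.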
Brian Candler wrote:
On Wed, Feb 15, 2012 at 08:22:18PM -0500, Miles Fidelman wrote:
We've been running a 2-node, high-availability cluster - basically xen
w/ pacemaker and DRBD for replicating disks. We recently purchased 2
additional servers, and I'm thinking about combining all 4 machines
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Miles Fidelman
Sent: Friday, 17 February 2012 5:47 AM
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] question re. current state of art/practice
Brian Candler wrote:
On Wed, Feb 15, 2012 at 08:22
A quick follow-up question:
Thomas Jackson wrote:
We set up a 4 node cluster using Gluster with KVM late last year - which has
been running along quite nicely for us.
snip
I'd definitely say this approach is at least worth a look for anyone building a
small VM cluster. Gluster is still a bit
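For a Gluster-backed KVM setup like the one described, the VM hosts typically mount the volume with the FUSE client. A sketch, assuming a hypothetical volume name vmstore and servers node1/node2 (the backupvolfile-server option lets the mount fall back to another node if node1 is unreachable at mount time):

```shell
# One-off mount on a VM host
mount -t glusterfs -o backupvolfile-server=node2 \
    node1:/vmstore /var/lib/libvirt/images

# Equivalent /etc/fstab entry:
# node1:/vmstore /var/lib/libvirt/images glusterfs defaults,_netdev,backupvolfile-server=node2 0 0
```

Any node in the pool can serve the mount; the client fetches the volume layout from it and then talks to all bricks directly.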
to get answered
Miles Fidelman
--
In theory, there is no difference between theory and practice. In
practice, there is. Yogi Berra
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
in such an environment? If so, any
suggestions as to how to configure things?
Thank you very much,
Miles Fidelman
to the point
that I can use Gluster this way?
Thanks very much,
Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is. Yogi Berra
putting in
the effort of moving forward with some experimentation, or whether this
is a non-starter. Is there anyone out there who's tried to run this
kind of mini-cloud with gluster? What kind of results have you had?
On 12/26/2012 08:24 PM, Miles Fidelman wrote:
Hi Folks,
I find myself
Gerald Brandt wrote:
On 12-12-26 10:24 PM, Miles Fidelman wrote:
The thing is, I'm trying to add 2 nodes to the cluster, and DRBD
doesn't scale. Also, as a function of rackspace limits, and the
hardware at hand, I can't separate storage nodes from compute nodes -
instead, I have to live
Brian Candler wrote:
On Wed, Dec 26, 2012 at 11:24:25PM -0500, Miles Fidelman wrote:
I find myself trying to expand a 2-node high-availability cluster
to a 4-node cluster. I'm running Xen virtualization, and
currently using DRBD to mirror data and Pacemaker to fail over
cleanly
Let me point out that Stephan's rant (or constructive criticism)
has hijacked responses to my original question about how well (or not)
gluster will work for my specific situation. While the general comments
are relevant (as in it won't work for anyone, here's why), it does
take the
Dan Cyr wrote:
Miles - As is right now GlusterFS is not what you want for backend VM
storage.
Question: “how well will this work”
Answer: “horribly”
Ok... that's the kind of answer I was looking for (though a
disappointing one).
Thanks,
Miles
--
In theory, there is no difference between theory and practice.
In practice, there is. Yogi Berra
in 3.3 is the handling of the
locking and performance problems during disk rebuilds, which is now
done at a much more granular level; I have successfully self-healed
several VM images simultaneously without any measurable delays.
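The granular self-heal described here can be watched and driven from the 3.3 CLI; a sketch, assuming a hypothetical volume named vmstore:

```shell
# List files that need (or are undergoing) healing
gluster volume heal vmstore info

# Trigger a heal of the files that need it, without a full crawl
gluster volume heal vmstore
```

`gluster volume heal vmstore info healed` and `gluster volume heal vmstore info split-brain` show the heal history and any unresolved conflicts, respectively.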
Miles Fidelman <mfidel...@meetinghouse.net> wrote:
failures)
Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is. Yogi Berra
it work for your use case.
On 12/27/2012 03:00 PM, Miles Fidelman wrote:
Ok... now that's the diametric opposite of Dan Cyr's response
of a few minutes ago.
Can you say just a bit more about your configuration - how many
nodes, do you have storage and processing combined or separated, how
Jeff Darcy wrote:
On 12/27/12 6:47 PM, Miles Fidelman wrote:
John Mark Walker wrote:
In general, I don't recommend any distributed filesystems for VM
images, but I can also see that this is the wave of the future.
Ok. I can see that.
Let's say that I take a slightly looser approach to high
Jeff,
Thanks for the details. If I might trouble you for a few more...
Jeff Darcy wrote:
On 12/30/12 1:33 PM, Miles Fidelman wrote:
What's the alternative, though? Ok, for application files (say a word
processing document) that works, but what about spools, databases, and
such? Seems like
strategy makes sense (or if
there's a better approach I'm not considering)?
Thanks very much,
Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is. Yogi Berra