On 01/02/2013 03:37 AM, Jeff Darcy wrote:
On 1/2/13 6:01 AM, Brian Candler wrote:
On Fri, Dec 28, 2012 at 10:14:19AM -0800, Joe Julian wrote:
My volume would then look like:

gluster volume create vmimages replica 3 \
    server{1,2,3}:/data/glusterfs/vmimages/a/brick \
    server{1,2,3}:/data/glusterfs/vmimages/b/brick \
    server{1,2,3}:/data/glusterfs/vmimages/c/brick \
    server{1,2,3}:/data/glusterfs/vmimages/d/brick
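
A minimal sketch of what might back each of those brick paths on a server,
assuming the volume name "vmimages" above is inferred from the paths, one
XFS-formatted LV per disk in a VG named clustervg, and illustrative LV names
a-d (none of these names or mkfs options are from the original post):

    # On each server: one LV per physical disk, mounted per brick.
    mkfs.xfs -i size=512 /dev/clustervg/a
    mkdir -p /data/glusterfs/vmimages/a
    mount /dev/clustervg/a /data/glusterfs/vmimages/a
    mkdir -p /data/glusterfs/vmimages/a/brick   # brick dir inside the mount

With replica 3 and the bricks listed in that order, each trio of "a" (then
"b", "c", "d") bricks across server1/2/3 forms one replica set, distributed
four ways.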
Aside: what is the reason for creating four separate logical volumes/bricks
on the same node, and then combining them using gluster distribution?
I'm not Joe, but I can think of two reasons why this might be a good idea.  One
is superior fault isolation.  With a single concatenated or striped LV (i.e.
without the redundancy of true RAID), a failure of any individual disk will
appear as a failure of the entire brick, forcing *all* traffic over to the
peers.  With multiple LVs, that same failure will cause only 1/4 of the
traffic to fail over.  The other reason is performance.  I've found it very
hard to predict whether letting LVM schedule across disks or letting GlusterFS
do so will perform better for any given workload, but in my experience the
latter tends to win slightly more often than not.

Fault isolation is, indeed, why I do that. I don't need reads any faster than my network can handle, so RAID isn't going to help me there. When a drive fails, Gluster has (mostly) been good about handling that failure transparently to my services.

Also, why are you combining all your disks into a single
volume group (clustervg), but then allocating each logical volume from only
a single disk within that VG?
That part's a bit unclear to me as well.  There doesn't seem to be any
immediate benefit, but perhaps it's more an issue of preparing for possible
future change by adding an extra level of naming/indirection.  That way, if the
LVs need to be reconfigured some day, the change will be pretty transparent to
anything that was addressing them by ID anyway.
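
A minimal sketch of that layout, with hypothetical device names: one VG over
all four disks, each LV pinned to a single PV by naming that PV at the end of
lvcreate.

    # One VG spanning all four disks...
    vgcreate clustervg /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # ...but each LV allocated entirely from one PV (the trailing argument):
    lvcreate -n a -L 900G clustervg /dev/sdb
    lvcreate -n b -L 900G clustervg /dev/sdc
    lvcreate -n c -L 900G clustervg /dev/sdd
    lvcreate -n d -L 900G clustervg /dev/sde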

Aha! Because when a drive's in pre-failure, I can pvmove the LVs onto the new drive, or onto the other drives temporarily.
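
A minimal sketch of that evacuation, with hypothetical device names (pvmove
works online, so the brick stays mounted and serving the whole time):

    # Bring the replacement disk into the VG, then migrate extents off the
    # failing PV:
    pvcreate /dev/sdf
    vgextend clustervg /dev/sdf
    pvmove /dev/sdb /dev/sdf          # move everything off the failing disk
    # or move just one LV's extents:  pvmove -n a /dev/sdb /dev/sdf
    vgreduce clustervg /dev/sdb       # drop the old PV once it's empty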
_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
