I don't have time to write up a long answer right now (work's killing me today), but if you search for "lvm" in the IRC log, you'll find a discussion we had about that a few days ago (or was it a week... they're all blending together).

On 01/10/2013 03:06 PM, Gaurav P wrote:
*bump*


On Mon, Jan 7, 2013 at 8:13 PM, Gaurav P <[email protected]> wrote:

    Hi,

    I've been reading up on GlusterFS and I'm looking for best
    practices around using multiple disks as bricks in servers that
    will be part of a replicated volume.

    Say I start with a single disk each in two servers (/dev/sda1
    mounted at /a)

    gluster volume create test-volume replica 2 transport tcp server1:/a server2:/a


    Then I add a second disk in each server (/dev/sdb1 mounted at /b)

    gluster volume add-brick test-volume replica 2 server1:/b server2:/b


    With this (after rebalancing), am I correct in understanding that
    I will have a distributed-replicated volume, with GlusterFS
    providing the equivalent of RAID 1+0 for the data on my volume?

    Now, as I understand it, I will be restricted to adding disks
    (bricks) of the same size whenever I need to extend the volume.
    What are the pros and cons of instead using LVM to provide a
    single LV on each server, and extending the LV and filesystem each
    time I add additional storage? The other benefit of LVM would be
    the ability to take snapshots. The one downside I foresee is that
    a concatenated (linear) LV will not use the second PV (disk) until
    the first PV is full, though I could perhaps stripe instead?
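
    To make the comparison concrete, here is roughly what I'd expect
    the LVM route to look like when /dev/sdb1 arrives (the names "gvg"
    and "brick_lv" are just placeholders, and I'm assuming an XFS
    filesystem on the brick):

    # add the new disk to the volume group backing the brick
    pvcreate /dev/sdb1
    vgextend gvg /dev/sdb1
    # grow the LV onto the new space, then grow the filesystem
    lvextend -l +100%FREE /dev/gvg/brick_lv
    xfs_growfs /a
    # a striped LV (lvcreate -i 2 -I 64 ...) would instead spread I/O
    # across both PVs, at the cost of needing matching-size PVs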

    More questions to follow, but I'm trying to think through this
    before I get started with my first deployment.

    TIA
    Gaurav




_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users

