On Thu, Sep 17, 2009 at 7:36 PM, Matthew Ingersoll <[email protected]> wrote:

> I'm testing a two-node primary/primary DRBD setup but have a few concerns
> related to the use of md for RAID0 striping.  The setup is as follows:
>
> Each node runs two DRBD devices in a primary/primary setup.  These devices
> are then striped using the mdadm utility.  From there, logical volumes are
> set up using LVM (I'm running OpenAIS + clvmd to keep the nodes in sync).
> The following output should explain most of this (identical on node00 and
> node01):
>
> r...@node00:~# cat /proc/mdstat
> Personalities : [raid0]
> md0 : active raid0 drbd1[1] drbd0[0]
>      117190272 blocks 64k chunks
>
> r...@node00:~# pvs -a
>  PV         VG   Fmt  Attr PSize   PFree
>  /dev/md0   san0 lvm2 a-   111.76G 31.76G
>
>
> From there, logical volumes are created and shared via iSCSI.  Doing
> round-robin tests over iSCSI has not shown any corruption yet (meaning I'm
> reading from and writing to both node00 and node01).  My main concern is
> the striping portion and what is actually going on there.  I also tested
> the striping using only LVM; that appears to work fine too.
>
>
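For reference, the setup you describe corresponds roughly to commands like
these. This is only a sketch: the chunk size and VG name come from your
output above, while the LV name and size are placeholders I've invented.

  # stripe the two DRBD devices (64k chunk, as shown in /proc/mdstat)
  mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 \
      /dev/drbd0 /dev/drbd1

  # LVM on top of the stripe; "san0" is the VG name from the pvs output
  pvcreate /dev/md0
  vgcreate san0 /dev/md0

  # example LV to export via an iSCSI target (iet/tgt, not shown here)
  lvcreate --name lun0 --size 40G san0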
What about doing something like this instead (sketched in commands after the list):

- sda and sdb on both nodes
- RAID0 on both nodes, so that on each node you have one md0 device
- only one DRBD resource, based on md0, on both nodes
- use the drbd0 device as the PV for your VG
- add clvmd to your cluster layer (you need cman too for clvmd)
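
In command form, that would look roughly like this on each node. The
resource name, hostnames, and addresses below are placeholders, and the
config snippet assumes a DRBD 8.3-style /etc/drbd.d layout:

  resource r0 {
      protocol C;
      net { allow-two-primaries; }
      on node00 {
          device    /dev/drbd0;
          disk      /dev/md0;
          address   192.168.1.10:7788;
          meta-disk internal;
      }
      on node01 {
          device    /dev/drbd0;
          disk      /dev/md0;
          address   192.168.1.11:7788;
          meta-disk internal;
      }
  }

and then, per node:

  # stripe the two local disks first
  mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 \
      /dev/sda /dev/sdb

  # DRBD on top of md0
  drbdadm create-md r0
  drbdadm up r0
  # after the initial sync, promote both nodes to primary, then:
  pvcreate /dev/drbd0
  vgcreate san0 /dev/drbd0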

I'm doing this myself, but with only one disk per node.
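
For the clvmd piece, the usual sequence on a cman-based stack is something
like this (service names assume RHEL/CentOS-style init scripts; adjust for
your distro):

  lvmconf --enable-cluster    # sets locking_type = 3 in lvm.conf
  service cman start
  service clvmd start
  vgchange -cy san0           # mark the shared VG as clustered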
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
