I'm using a logical volume as drbd's underlying device
(/dev/lvmdatas/db for /dev/drbd1), with `before-resync-target
"/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 5 -- -c 16k";' and
`after-resync-target "/usr/lib/drbd/unsnapshot-resync-target-lvm.sh";'
in drbd.conf.
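
For context, the relevant part of the resource looks roughly like this
(the resource name is made up here, the per-host sections are omitted,
and I use internal meta-data):

    resource db {
        device    /dev/drbd1;
        disk      /dev/lvmdatas/db;
        meta-disk internal;

        handlers {
            before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 5 -- -c 16k";
            after-resync-target  "/usr/lib/drbd/unsnapshot-resync-target-lvm.sh";
        }
    }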

Consider the following scenario:

nodeA (Primary) and nodeB (Secondary) were disconnected at first, and
their disk states were "UpToDate/Outdated".
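
(Checked with drbdadm, e.g., assuming the resource is named db:

    drbdadm cstate db    # StandAlone / WFConnection
    drbdadm dstate db    # UpToDate/Outdated on nodeA
)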

When I reconnect the two nodes, nodeB will create the snapshot
`/dev/lvmdatas/db-before-resync', which will be deleted after the
resync completes, and everything will be fine.
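
(While the resync is running, the snapshot is visible with lvs, e.g.:

    lvs lvmdatas    # shows db-before-resync with db as its Origin
)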

But what if nodeA crashes during the resync? Now /dev/drbd1,
i.e. /dev/lvmdatas/db, holds inconsistent data, while
/dev/lvmdatas/db-before-resync holds consistent, but outdated, data. I
would like to recover /dev/lvmdatas/db from
/dev/lvmdatas/db-before-resync and then bring up the database.

However, drbd sits on top of the logical volume, so my problem is: how
should I recover /dev/lvmdatas/db from the snapshot without breaking
the drbd metadata?
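
The only idea I have so far is an LVM snapshot merge, roughly (assuming
lvconvert --merge is available here, and with drbd stopped on this
node):

    drbdadm down db                              # drbd must no longer hold the LV open
    lvconvert --merge lvmdatas/db-before-resync  # merge the snapshot back into db

But I don't know whether that leaves the drbd metadata (internal, at
the end of the LV) in a usable state, or whether I would have to
re-create it afterwards.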

On Sat, Aug 27, 2011 at 2:28 AM, Dan Barker <[email protected]> wrote:
>
> Reading between the lines on this thread, I think you have mixed access 
> paths, and now believe that drbd is somehow involved in your troubles.
>
> It is perfectly correct to do what you attempted, i.e. break the mirror, test 
> some process, re-establish the mirror using the original data, and continue on.
>
> Assuming the drbd resource r0 as /dev/drbd0, stored on /dev/sdb1 on nodeA 
> (Primary) and nodeB (Secondary):
>
> This scenario would be accomplished by:
>
> On NodeB:
> =========
> drbdadm disconnect r0
> drbdadm primary r0
> mount /dev/drbd0 /somewhere
>
> ... do your test on nodeB ...
>
> umount /dev/drbd0
> drbdadm secondary r0
> drbdadm -- --discard-my-data connect r0
>
> and the nodes will sync up to the original data. The main node, nodeA, always 
> has the resource as primary, and goes from Connected to WFConnection to 
> SyncSource to Connected. No access to the physical device /dev/sdb1 is made 
> by any process other than drbd. You never stop drbd on either node.
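>
> (You can watch that happen on nodeA with, for example:
>
>     watch -n1 cat /proc/drbd
>
> or query a single resource with `drbdadm cstate r0'.)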
>
> I'm guessing that at some point you mounted /dev/sdb1 instead of /dev/drbd0, 
> and that is the source of your problems; updates occurred that drbd did not 
> see. Using the full disk rather than a partition (/dev/sdb instead of 
> /dev/sdb1 in this case) could help prevent you from shooting yourself in the 
> foot. But you can ALWAYS shoot yourself in the foot.
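>
> (In the resource definition that would be, for example:
>
>     disk /dev/sdb;    # whole disk, instead of disk /dev/sdb1;
> )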
>
> I do not understand your comment "I shut down drbd, try to use rsync to 
> recovery the snapshot, but failed". It sounds like there is a Logical Volume 
> Manager involved, or some other underlying block device that supports 
> snapshots.
>
> Again, drbd can only replicate changes to a block device that occur while 
> drbd is handling that device. The observations above can be applied to any 
> block device. If you expect drbd to handle the block device, drbd must be 
> the only thing accessing that device, Connected or not, Primary or Secondary.
>
> hth
>
> Dan Barker
>
> -----Original Message-----
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of Arnold Krille
> Sent: Friday, August 26, 2011 2:08 PM
> To: [email protected]
> Subject: Re: [DRBD-user] data integrity in drbd
>
> On Friday 26 August 2011 19:50:29 you wrote:
> > In my situation, what should I do to completely resync the data of the
> > secondary node, including the drbd metadata?
>
> Delete the secondary's disk, create the meta-data anew, and make it sync 
> completely from the primary. And if something is still wrong in the files on 
> disk, restore them from the backup...
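>
> (A sketch of that, run on the secondary and assuming the resource is
> called r0:
>
>     drbdadm disconnect r0
>     drbdadm detach r0
>     drbdadm create-md r0     # wipe and re-create the drbd meta-data
>     drbdadm attach r0
>     drbdadm invalidate r0    # mark the local data as inconsistent
>     drbdadm connect r0       # a full sync from the primary follows
> )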
>
> (And don't ask public questions in private:)
>
> Have a nice weekend,
>
> Arnold
>
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user