On 09/14/2016 02:55 PM, Ilya Dryomov wrote:
> On Wed, Sep 14, 2016 at 9:01 AM, Nikolay Borisov <[email protected]> wrote:
>>
>>
>> On 09/14/2016 09:55 AM, Adrian Saul wrote:
>>>
>>> I found I could ignore the XFS issues and just mount it with the 
>>> appropriate options (below from my backup scripts):
>>>
>>>         #
>>>         # Mount with nouuid (conflicting XFS) and norecovery (ro snapshot)
>>>         #
>>>         if ! mount -o ro,nouuid,norecovery "$SNAPDEV" "/backup${FS}"; then
>>>                 echo "FAILED: Unable to mount snapshot $DATESTAMP of $FS - cleaning up"
>>>                 rbd unmap "$SNAPDEV"
>>>                 rbd snap rm "${RBDPATH}@${DATESTAMP}"
>>>                 exit 3
>>>         fi
>>>         echo "Backup snapshot of $RBDPATH mounted at: /backup${FS}"
>>>
>>> Without clones, it's not possible to mount the snapshot without norecovery.
>>
>> But shouldn't freezing the fs and doing a snapshot constitute a "clean
>> unmount", hence no need to recover on the next mount (of the snapshot) -
>> Ilya?
> 
> I *thought* it should (well, except for orphan inodes), but now I'm not
> sure.  Have you tried reproducing with loop devices yet?

Here is what the checksum tests showed:

fsfreeze -f /mountpoint
md5sum /dev/rbd0
f33c926373ad604da674bcbfbe6460c5  /dev/rbd0
rbd snap create xx@xxx && rbd snap protect xx@xxx
rbd map xx@xxx
md5sum /dev/rbd1
6f702740281874632c73aeb2c0fcf34a  /dev/rbd1

where rbd1 is a snapshot of the rbd0 device. So the checksums are indeed
different, which is worrying.
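For the loop-device reproduction Ilya asks about, something like the
following should do it - a rough sketch only; the paths and sizes are made
up for the example, it needs root plus xfsprogs, and it skips itself
otherwise. The idea is to take a "snapshot" by copying the frozen device
with dd, so Ceph is out of the picture entirely:

```shell
# Freeze an XFS filesystem on a loop device, copy the device while frozen
# (a poor man's snapshot), and compare checksums of device vs. copy.
# /tmp paths below are placeholders, not anything from the thread.

if [ "$(id -u)" -ne 0 ] || ! command -v mkfs.xfs >/dev/null 2>&1; then
        RESULT="skipped"          # no root or no xfsprogs; nothing to test
else
        IMG=/tmp/xfs.img
        SNAP=/tmp/xfs.snap.img
        MNT=/tmp/xfs.mnt

        dd if=/dev/zero of="$IMG" bs=1M count=64 status=none
        LOOP=$(losetup -f --show "$IMG")
        mkfs.xfs -q "$LOOP"
        mkdir -p "$MNT"
        mount "$LOOP" "$MNT"
        echo data > "$MNT/file"

        fsfreeze -f "$MNT"                           # quiesce, as before rbd snap create
        dd if="$LOOP" of="$SNAP" bs=1M status=none   # "snapshot" of the frozen device
        SUM_DEV=$(md5sum "$LOOP" | awk '{print $1}')
        SUM_SNAP=$(md5sum "$SNAP" | awk '{print $1}')
        fsfreeze -u "$MNT"

        umount "$MNT"
        losetup -d "$LOOP"
        rm -f "$IMG" "$SNAP"

        if [ "$SUM_DEV" = "$SUM_SNAP" ]; then RESULT=match; else RESULT=differ; fi
fi
echo "$RESULT"
```

If fsfreeze really quiesces the device, the two sums should match here; if
they also differ on a plain loop device, the rbd result above would be
reproduced without Ceph involved.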

> 
> Thanks,
> 
>                 Ilya
> 
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com