Dear all,

 I am doing more experiments with the Ceph iSCSI gateway, and I am a bit confused about 
how to properly repurpose an RBD image from an iSCSI target into a QEMU virtual disk 
and back.

 First, I create an RBD image and set it up as an iSCSI backstore in gwcli, 
specifying its size exactly to avoid unwanted resizes.
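 For reference, the gateway-side setup looks roughly like this (a sketch assuming a 
systemd-based ceph-iscsi/gwcli deployment; the pool and image names are placeholders, 
not my real ones):

```shell
# Create the RBD image with an explicit size.
# 'iscsi-pool' and 'win-disk' are example names.
rbd create iscsi-pool/win-disk --size 1700G

# Attach the existing image as a backstore disk in gwcli,
# giving exactly the same size so gwcli does not resize it.
gwcli /disks create pool=iscsi-pool image=win-disk size=1700G
```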
 Next, I connect Windows 2008 R2 to this image (enabling MPIO before connecting and 
selecting the MPIO policy 'Failover Only' for the accessed device).
 Then, in Windows Disk Management, I initialize the physical disk with GPT, 
convert it to a dynamic disk, and create a simple NTFS volume in its free space.
 Then, in the same console, I take the disk 'offline', and in the iSCSI control panel 
I disconnect the session from the Windows side.

 Then I attach the same RBD image to a QEMU/KVM virtual machine running Ubuntu 18.04, 
as a virtio/librbd storage drive.
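 In case it matters, by "virtio/librbd" I mean attaching the image through QEMU's 
librbd driver rather than a kernel-mapped device; a minimal command-line sketch 
(image spec, auth user, and the VM's boot disk are example values):

```shell
# Boot the Ubuntu VM with the RBD image as a second virtio disk.
# 'rbd:pool/image:option=value' is QEMU's librbd drive syntax;
# 'ubuntu1804.qcow2' and 'id=admin' are placeholders for my setup.
qemu-system-x86_64 -enable-kvm -m 4096 \
  -drive file=ubuntu1804.qcow2,if=virtio \
  -drive file=rbd:iscsi-pool/win-disk:id=admin,format=raw,if=virtio
```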
 Then I boot the Ubuntu 18.04 VM, expose the NTFS filesystem using 'ldmtool create 
all', and during ntfsclone from an external disk I discover that the RBD image is 
mapped read-only.
 OK, I stop the Ubuntu VM, do 'rbd lock rm' for this image (the lock is held by 
tcmu-runner, I suppose), restart Ubuntu, restart ntfsclone, and this time it 
goes well.
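 For completeness, the lock removal was along these lines (image name is a 
placeholder; the lock id and locker come from the listing output):

```shell
# List current locks on the image to see the lock id and locker
# (in my case presumably held by tcmu-runner on the gateway).
rbd lock ls iscsi-pool/win-disk

# Remove the lock, passing the LOCK_ID and LOCKER printed above.
rbd lock rm iscsi-pool/win-disk "LOCK_ID" "LOCKER"
```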
 By the way, ntfsclone onto the device-mapper target created by ldmtool runs about 2x 
faster than directly onto the virtio disk (vdN), so it transferred my 1600+ GB in 
just 13+ hours instead of 27+.
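 The clone itself was essentially this (the device-mapper name and the source 
partition are examples; the real names come from the 'ldmtool create all' output and 
from wherever the external disk shows up in the VM):

```shell
# Scan all block devices for Windows LDM (dynamic disk) metadata and
# create a device-mapper target for each discovered volume.
ldmtool create all

# Clone the source NTFS filesystem onto the LDM-backed target
# (ntfsclone takes the destination first with --overwrite).
ntfsclone --overwrite /dev/mapper/ldm_vol_ExampleDg0_Volume1 /dev/sdb1
```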

 OK, the external NTFS is cloned seemingly well; I shut down the Ubuntu VM (it 
properly removed the RBD lock on shutdown) and try to access the image from Windows 
over iSCSI again.
 And at this moment I run into trouble. First, I don't see the added RBD image 
in 'Devices' in the iSCSI initiator control panel. I tried to resolve this by 
restarting tcmu-runner.
 After reconnecting from the Windows side, the RBD image became visible in Devices 
(and the RBD lock was reacquired by the tcmu side), 
 but its MPIO button was disabled, so I could not check or change the MPIO policy 
(and yes, I do enable MPIO in the 'Connect' dialog).
 I also tried restarting rbd-target-gw, but this did not help either. Restarting the 
Windows server did not improve the situation (the MPIO button is still disabled).
 What should I try restarting next, to avoid restarting the whole Ceph host? 
Maybe unload/reload some kernel modules?
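 For clarity, the gateway-side restarts I have tried so far were (systemd units on my 
Ceph host; unit names may differ between ceph-iscsi versions):

```shell
# Restart the tcmu-runner daemon that backs the LIO user-space
# backstore and holds the RBD exclusive lock.
systemctl restart tcmu-runner

# Restart the ceph-iscsi gateway daemon.
systemctl restart rbd-target-gw
```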

 Thanks in advance for your help. I hope I could track down and resolve the 
problem myself, but that would likely take more time than getting help from you.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
