On Mon, Jan 22, 2018 at 7:21 AM, Zip <pl...@intricatenetworks.com> wrote:

> I am having an issue where, when I use the REST API to attach a snapshot
> disk from another VM to a Backup-Appliance-VM, after the copy completes
> and I remove the disk and delete the snapshot, the disk remains in the
> Backup-Appliance-VM as /dev/sdb or /dev/vdb.
>
> If I reboot the Backup-Appliance-VM, the disk disappears.
>
> If I manually remove the disk with "echo 1 > /sys/block/sdb/device/delete",
> the disk will disappear, but if I rescan the SCSI bus it is found and
> shows up again in the VM OS, even though the oVirt WebGUI does NOT show
> it as connected.
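>
> For reference, a minimal sketch of that manual removal and rescan, run as
> root inside the guest (the device name sdb and the SCSI host number host2
> are assumptions and will vary):
>
>     # Sketch only: force-remove the stale disk, then rescan the SCSI bus.
>     from pathlib import Path
>
>     def remove_block_device(dev='sdb'):
>         # same as: echo 1 > /sys/block/sdb/device/delete
>         Path('/sys/block/{}/device/delete'.format(dev)).write_text('1')
>
>     def rescan_scsi_host(host='host2'):
>         # same as: echo "- - -" > /sys/class/scsi_host/host2/scan
>         # this rescan is what makes the stale disk reappear in the guest
>         Path('/sys/class/scsi_host/{}/scan'.format(host)).write_text('- - -')
>
>     remove_block_device('sdb')   # the disk disappears from the guest
>     rescan_scsi_host('host2')    # ...but comes back after the rescan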
>

The first part is expected - the second isn't.


>
> I am also not able to attach any other disks, as it complains of:
>
> HotPlugDiskVDS failed: internal error: unable to execute QEMU command
> '__com.redhat_drive_add': Duplicate ID 'drive-scsi0-0-0-2' for drive
>
> I did see that others in the past have gotten around this issue by
> rebooting the Backup-Appliance-VM and then continuing on with the next
> VM backup, looping through backup, reboot, backup, reboot, and so on.
>
> Does anyone have an idea how to solve this issue and remove the stale
> device from the guest OS?
>
> Steps to reproduce this issue (a rough SDK sketch of this flow follows
> the list):
>
>
>    1. Create a backup appliance VM to be used for the backup script
>    execution.
>    2. Currently I have the VMs set to VirtIO with threaded I/O enabled.
>    I also tried VirtIO-SCSI with the same result.
>    3. Using the REST API – make a snapshot of the target VM.
>    4. Using the REST API – fetch the VM metadata.
>    5. Using the REST API – attach the snapshot's disk to the
>    Backup-Appliance-VM.
>    6. dd the attached drive to the backup folder.
>    7. Using the REST API – remove the disk from the Backup-Appliance-VM.
>    8. Using the REST API – delete the snapshot.
>    9. ** Check the guest OS of the Backup-Appliance-VM: the disk attached
>    for the backup above still appears and behaves as described in the
>    comments above.
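>
> For reference, here is a minimal sketch of the flow above using the
> Python SDK (ovirtsdk4), which wraps the REST API. The engine URL,
> credentials, VM names, device name and backup path are placeholders, and
> the waits and error handling are trimmed:
>
>     # Sketch of steps 3-8 above; names and paths are placeholders.
>     import subprocess
>     import time
>
>     import ovirtsdk4 as sdk
>     import ovirtsdk4.types as types
>
>     conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
>                           username='admin@internal', password='...',
>                           ca_file='ca.pem')
>     vms_service = conn.system_service().vms_service()
>     target_vm = vms_service.list(search='name=target-vm')[0]
>     appliance_vm = vms_service.list(search='name=backup-appliance')[0]
>
>     # 3. Snapshot the target VM (no memory state for a backup snapshot).
>     snaps_service = vms_service.vm_service(target_vm.id).snapshots_service()
>     snap = snaps_service.add(types.Snapshot(description='backup',
>                                             persist_memorystate=False))
>     snap_service = snaps_service.snapshot_service(snap.id)
>     while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
>         time.sleep(5)
>
>     # 4. (Fetching the VM metadata/OVF is omitted from this sketch.)
>
>     # 5. Attach the snapshot's disk(s) to the Backup-Appliance-VM.
>     appliance_service = vms_service.vm_service(appliance_vm.id)
>     attachments_service = appliance_service.disk_attachments_service()
>     attachments = []
>     for disk in snap_service.disks_service().list():
>         attachments.append(attachments_service.add(types.DiskAttachment(
>             disk=types.Disk(id=disk.id, snapshot=types.Snapshot(id=snap.id)),
>             interface=types.DiskInterface.VIRTIO,
>             bootable=False,
>             active=True)))
>
>     # 6. Inside the appliance: dd the hot-plugged device to the backup
>     #    folder (the device name depends on attach order).
>     subprocess.run(['dd', 'if=/dev/sdb', 'of=/backup/target-vm.raw',
>                     'bs=4M'], check=True)
>
>     # 7. Detach the disk(s) from the Backup-Appliance-VM.
>     for att in attachments:
>         attachments_service.attachment_service(att.id).remove()
>
>     # 8. Delete the snapshot on the target VM.
>     snap_service.remove()
>     conn.close()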
>
>
>
There are many details missing, including versions of everything used, but
logs would be most helpful here.


> A second issue is that the above won't work when I have the VMs running
> on MPIO iSCSI storage, so for testing I have moved to NFSv4. If anyone
> has ideas about either issue, I'd love to hear them ;)
>

Same - logs would be helpful here.
Y.


>
> Thanks
>
> Irc.oftc.net #ovirt
> zipur
>
>
>
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
