You could possibly create a new disk image if qemu recognizes the backing files.
Check whether qemu detects the backing files:
#qemu-img info path/to/snapshot
If it shows the previous snapshot or the full disk image as "backing file", you
can try to create a new image which includes the changes in the snapshots:
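The quoted advice is cut off at this point. As a hedged sketch only (the paths are placeholders, not from the thread): `qemu-img convert` reads through the whole backing chain, so it is one common way to produce a standalone image that includes all snapshot changes.

```python
import shutil
import subprocess

def flatten_chain_cmd(snapshot, output):
    # "convert" reads through the entire backing chain, so the resulting
    # image is standalone and includes the changes from every snapshot.
    return ["qemu-img", "convert", "-O", "qcow2", snapshot, output]

if shutil.which("qemu-img"):
    # Inspect the chain first; every layer should list its backing file.
    # "path/to/snapshot" is a placeholder path, as in the quoted advice.
    subprocess.run(["qemu-img", "info", "--backing-chain", "path/to/snapshot"],
                   check=False)
    subprocess.run(flatten_chain_cmd("path/to/snapshot", "merged.qcow2"),
                   check=False)
```

`qemu-img commit` is the in-place alternative (it merges a layer down into its backing file), but convert-to-a-new-image is the safer option when the original chain must stay untouched.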
Never mind that, finalization just took about an hour to finish, but it's done
now.
I was able to successfully delete the VM, all is good again, thanks.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
I did not try that!
found 2 transfers "paused by system" in /ovirt-engine/api/imagetransfers
POST /ovirt-engine/api/imagetransfers/f48eb885-f82d-4f86-8e7f-cb995d1581f0/finalize
complete
POST
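Finalizing a stuck transfer by hand boils down to the POST shown above. A minimal stdlib sketch of how that request can be built (the engine URL and token below are placeholders, not values from the thread):

```python
import urllib.request

def finalize_transfer_request(engine_api, transfer_id, token):
    # The REST equivalent of the "finalize" action: POST an empty
    # <action/> body to the transfer's /finalize sub-resource.
    url = f"{engine_api}/imagetransfers/{transfer_id}/finalize"
    return urllib.request.Request(
        url,
        data=b"<action/>",
        method="POST",
        headers={
            "Content-Type": "application/xml",
            "Authorization": f"Bearer {token}",  # placeholder credentials
        },
    )

req = finalize_transfer_request(
    "https://engine.example.com/ovirt-engine/api",  # hypothetical engine URL
    "f48eb885-f82d-4f86-8e7f-cb995d1581f0",
    "TOKEN",
)
# urllib.request.urlopen(req) would actually send it; omitted on purpose.
```

The same request can of course be sent with curl or through the ovirtsdk4 transfer service; the point is only the endpoint shape and the empty `<action/>` body.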
Yes I have, below is a snippet of the DEBUG log:
[root] < POST /ovirt-engine/api/vms/f9ec0eaa-1721-4114-a3b2-94fa6eca3f15/backups/10404c7c-1106-413b-9ced-bdd907e996a6/finalize HTTP/1.1
[root] > [Cannot stop VM backup. There is an active image transfer for VM backup]
[root] > Operation
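The error above points at an ordering constraint: the engine refuses to finalize the backup while one of its image transfers is still active. A small sketch of the call order that follows from that (stdlib only; the engine URL is hypothetical):

```python
def teardown_plan(engine_api, vm_id, backup_id, transfer_ids):
    # Order matters: finalize every stuck image transfer first, because
    # the engine rejects the backup finalize while a transfer is active.
    steps = [
        ("POST", f"{engine_api}/imagetransfers/{tid}/finalize")
        for tid in transfer_ids
    ]
    steps.append(
        ("POST", f"{engine_api}/vms/{vm_id}/backups/{backup_id}/finalize")
    )
    return steps

plan = teardown_plan(
    "https://engine.example.com/ovirt-engine/api",  # placeholder URL
    "f9ec0eaa-1721-4114-a3b2-94fa6eca3f15",
    "10404c7c-1106-413b-9ced-bdd907e996a6",
    ["f48eb885-f82d-4f86-8e7f-cb995d1581f0"],
)
```

This matches what eventually happened in the thread: once the transfers were finalized (which took a while), the backup could be cleaned up.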
Hi,
I was working on my own backup application (a web front end for ovirtsdk) and I
somehow managed to get my VM disks stuck in status: "paused by system".
I have tried to stop the backup and finalize the backup manually, but nothing
works.
ovirtsdk4.Error: Fault reason is "Operation Failed".
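Finding the stuck transfers in the first place means listing /imagetransfers and filtering on phase. A minimal sketch assuming the JSON representation of the API response (the sample data below is made up; in the REST API the "Paused by System" state shows up as phase `paused_system`):

```python
def paused_transfers(transfers):
    # The UI label "Paused by System" corresponds to the REST API
    # phase value "paused_system" on each image transfer.
    return [t["id"] for t in transfers if t.get("phase") == "paused_system"]

sample = [  # made-up sample of what GET /imagetransfers might return
    {"id": "f48eb885-f82d-4f86-8e7f-cb995d1581f0", "phase": "paused_system"},
    {"id": "11111111-2222-3333-4444-555555555555", "phase": "finished_success"},
]
stuck = paused_transfers(sample)
```

Each ID found this way is a candidate for the manual finalize described above.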
I see, weird that I couldn't find that bug report with Google, even after
spending so much time searching for info on this topic.
So what is the preferred method of freeing up space now, since sparsifying is
disabled?
Does enabling "discard" for a VM disk take care of freeing up used space taken by
Unfortunately yes, even VMs/disks on the NFS data domain are giving the same
error.
Just to be sure, I just created a completely new NFS data share on TrueNAS,
imported the data domain into oVirt, and created a new test VM with 2 thin
provisioned disks on it; it cannot be sparsified either:
"Error
Hello,
I cannot get sparsifying to work in the latest oVirt 4.5.4 release (not that I
have tried with a previous release).
I have two data domains connected to oVirt using iSCSI & NFS.
I created 2 test VMs with a 50GB thin provisioned qcow2 disk for each, made some
temp files, removed them and
Hi,
I had a similar issue yesterday, "Wait for the host to be up", on a 3-node HCI
gluster setup, using the latest ovirt-node.
The issue was that the ansible playbook failed to update the temp private VM IP
(from /etc/hosts, created by ansible/hosted-engine); I received no error about
this during the