Hello,
It looks like this was indeed the problem.
I had the migration policy set to post-copy (I thought this was relevant
only to VM migration and not disk migration) and had
libvirt-4.5.0-23.el7_7.6.x86_64 on the problematic hosts. Restarting
VDSM after the migration indeed resolved the issue.
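For anyone hitting the same thing, a minimal sketch of the workaround (assuming an EL7-style host where vdsm runs under systemd; the guard makes the snippet harmless elsewhere):

```shell
# Sketch, assuming a host where vdsm is managed by systemd (as on EL7).
# Restarting vdsmd makes it resume monitoring the VM's jobs after the
# post-copy migration; the guard keeps this harmless on other machines.
if command -v systemctl >/dev/null 2>&1 && systemctl is-enabled vdsmd >/dev/null 2>&1; then
    systemctl restart vdsmd || true
    result="vdsmd restart attempted"
else
    result="vdsmd not present on this machine"
fi
echo "$result"
```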
Sorry for the late reply, but you may have hit this bug [1]; I forgot about it.
The bug happens when you live migrate a VM in post-copy mode: vdsm
stops monitoring the VM's jobs.
The root cause is an issue in libvirt, so it depends on which libvirt
version you have.
[1]
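Since whether the bug applies depends on the libvirt build, a quick hedged sketch for checking the installed version on a host (the libvirt-daemon package name is an assumption, matching EL7):

```shell
# Sketch: report the installed libvirt build, since whether the post-copy
# monitoring bug applies depends on the libvirt version on the host.
if command -v rpm >/dev/null 2>&1; then
    libvirt_ver=$(rpm -q libvirt-daemon 2>/dev/null || echo "libvirt-daemon not installed")
else
    libvirt_ver="rpm not available on this machine"
fi
echo "$libvirt_ver"
```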
Hello,
I tried the live migration as well and it didn't help (it failed).
The VM disks were in an illegal state, so I ended up restoring the VM from
backup (it was the least complex solution in my case).
Thank you both for the help.
Regards,
On Thu, May 28, 2020 at 5:01 PM Strahil Nikolov wrote:
I used to have a similar issue and when I live migrated (from 1 host to
another) it automatically completed.
Best Regards,
Strahil Nikolov
On 27 May 2020 at 17:39:36 GMT+03:00, Benny Zlotnik wrote:
>Sorry, by overloaded I meant in terms of I/O, because this is an
>active layer merge,
Hello,
Not sure IO could be the case. The underlying storage itself is brand new
(NVMe) connected via FC and is barely at 10% capacity, with low IOPS and
practically zero latency. There are no IO limitations on the LUN itself. I
would also be able to see any IO problems on the other VMs (none in
Sorry, by overloaded I meant in terms of I/O, because this is an
active layer merge: the active layer
(aabf3788-8e47-4f8b-84ad-a7eb311659fa) is merged into the base image
(a78c7505-a949-43f3-b3d0-9d17bdb41af5) before the VM switches to use
it as the active layer. So if there is constant heavy I/O on the disk,
the merge can have trouble converging.
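The layering described above can be reproduced in miniature with qemu-img; base.qcow2 and overlay.qcow2 are throwaway names standing in for the base image and active layer, and the snippet only runs where qemu-img is available:

```shell
# Sketch of what an active layer merge does, using throwaway images.
if command -v qemu-img >/dev/null 2>&1; then
    workdir=$(mktemp -d)
    cd "$workdir"
    qemu-img create -f qcow2 base.qcow2 10M
    # overlay.qcow2 plays the role of the active layer on top of base.qcow2
    qemu-img create -f qcow2 -o backing_file=base.qcow2,backing_fmt=qcow2 overlay.qcow2
    # 'commit' writes the overlay's data down into its base, like the live merge
    qemu-img commit overlay.qcow2
    result="overlay committed into base.qcow2"
else
    result="qemu-img not available on this machine"
fi
echo "$result"
```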
Hello,
Yes, no problem. The XML is attached (I omitted the hostname and IP).
The server is quite big (8 CPUs / 32 GB RAM / 1 TB disk) yet not overloaded.
We have multiple servers with the same specs with no issues.
Regards,
On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik wrote:
Can you share the VM's xml?
Can be obtained with `virsh -r dumpxml `
Is the VM overloaded? I suspect it has trouble converging.
taskcleaner only cleans up the database; I don't think it will help here.
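A sketch of the dumpxml step, where "myvm" is a placeholder domain name (not from this thread) and the guard keeps it harmless on a machine without libvirt:

```shell
# Sketch ("myvm" is a hypothetical domain name): dump the VM's XML read-only.
# Redact hostnames/IPs before sharing, e.g. by editing the saved file.
if command -v virsh >/dev/null 2>&1; then
    virsh -r dumpxml myvm > /tmp/myvm.xml 2>/dev/null || true
    result="dumpxml attempted (see /tmp/myvm.xml if the domain exists)"
else
    result="virsh not available on this machine"
fi
echo "$result"
```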
___
Users mailing list -- users@ovirt.org
Hello,
Running `virsh blockjob sda --info` a couple of times shows 99 or 100%.
It looks like it is stuck / flapping for some reason:
Active Block Commit: [ 99 %]
Active Block Commit: [100 %]
What would be the best approach to resolve this?
I see that taskcleaner.sh can be used in cases like this?
You can't see it because it is not a task; tasks only run on the SPM. It
is a VM job, and the data about it is stored in the VM's XML; it's also
stored in the vm_jobs table.
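A hedged sketch of checking the vm_jobs table on the engine host (the "engine" database name and postgres access pattern are assumptions based on a default oVirt engine setup):

```shell
# Sketch: the live-merge job state can also be inspected in the engine
# database ("engine" database name assumed; run on the engine host as root).
QUERY="SELECT * FROM vm_jobs;"
if command -v psql >/dev/null 2>&1; then
    su - postgres -c "psql engine -c \"$QUERY\"" 2>/dev/null || true
    result="vm_jobs query attempted"
else
    result="psql not available here; on the engine host run: $QUERY"
fi
echo "$result"
```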
You can see the status of the job in libvirt with `virsh blockjob sda --info` (if it's still running).
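To see whether the job is actually progressing rather than flapping, it can be polled a few times; "myvm" is a placeholder domain name:

```shell
# Sketch ("myvm" is a hypothetical domain name): poll the block job a few
# times to see whether the reported progress moves or just flaps.
if command -v virsh >/dev/null 2>&1; then
    for i in 1 2 3; do
        virsh -r blockjob myvm sda --info 2>/dev/null || true
        sleep 1
    done
    result="block job polled"
else
    result="virsh not available on this machine"
fi
echo "$result"
```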
On Wed, May 27, 2020
Hello,
Thank you for the reply.
Unfortunately I can't see the task on any of the hosts:
vdsm-client Task getInfo taskID=f694590a-1577-4dce-bf0c-3a8d74adf341
vdsm-client: Command Task.getInfo with args {'taskID':
'f694590a-1577-4dce-bf0c-3a8d74adf341'} failed:
(code=401, message=Task id unknown:
Live merge (snapshot removal) runs on the host where the VM is running;
you can look for the job id
(f694590a-1577-4dce-bf0c-3a8d74adf341) on the relevant host.
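Locating the job id on the host is a plain grep of the vdsm log; the sketch below writes a sample line to a temp file so it is self-contained (the sample message format is illustrative, not an exact vdsm log line; on a real host you would grep /var/log/vdsm/vdsm.log instead):

```shell
# Sketch: find the job id in the vdsm log on the host running the VM.
JOB_ID="f694590a-1577-4dce-bf0c-3a8d74adf341"
LOG=$(mktemp)
printf 'INFO (jsonrpc/3) [virt.vm] starting merge, jobUUID=%s\n' "$JOB_ID" > "$LOG"
# On a real host: grep "$JOB_ID" /var/log/vdsm/vdsm.log
count=$(grep -c "$JOB_ID" "$LOG")
echo "$count"   # → 1
```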
On Wed, May 27, 2020 at 9:02 AM David Sekne wrote:
>
> Hello,
>
> I'm running oVirt version 4.3.9.4-1.el7.
>
> After a failed live