Well yes. However, there were many reasons for the failures. Please provide
some logs if you are hitting similar problems. The error in the specific case
I had referred to the secondary storage image, since the migration first
copies the image to secondary storage. If this is your problem, make sure
that secondary storage is not full, that it is properly visible and mounted
from your hypervisors, and that there is no MTU issue in the path (in case
you use jumbo frames).
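
As a rough sketch of those checks (the /mnt/secondary mount point and the
nfs-server.example.com name are placeholders, substitute your own), run on
each KVM host:

  # is the secondary storage export actually mounted, and is there free space?
  mount | grep nfs
  df -h /mnt/secondary

  # is the export still visible from the hypervisor?
  showmount -e nfs-server.example.com

  # with jumbo frames, verify a 9000-byte MTU end to end
  # (8972 = 9000 minus 28 bytes of IP/ICMP headers; -M do forbids fragmentation)
  ping -M do -s 8972 -c 3 nfs-server.example.com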

On Tue, Jul 26, 2022 at 3:10 PM Sina Kashipazha <s.kashipa...@protonmail.com>
wrote:

> Hey Curious,
>
> Did you find any workaround for this issue?
>
> Kind regards,
> Sina
>
> ------- Original Message -------
> On Monday, March 28th, 2022 at 19:07, Curious Pandora <p4nd...@gmail.com>
> wrote:
> >
> > Hello,
> >
> > We are in the process of migrating some of our VMs to a new zone-wide NFS
> > primary storage.
> >
> > Some of the live and offline migrations went smoothly. However, no more
> > migrations can take place at the moment.
> >
> > For an offline migration, the host selected by the scheduler to perform it
> > reports the following in the CloudStack agent.log:
> >
> > 2022-03-28 19:32:08,636 INFO [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-1:null) (logid:21246b44) Trying to fetch storage pool
> > e7f12bcd-9fae-388d-bbbe-cf0984e7776d from libvirt
> > 2022-03-28 19:32:08,696 INFO [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-1:null) (logid:21246b44) Attempting to create volume
> > 8f59974a-e5f8-4ba3-9808-3db35f3f3af1.qcow2 (NetworkFilesystem) in pool
> > e7f12bcd-9fae-388d-bbbe-cf0984e7776d with size (80.00 GB) 85899345920
> > 2022-03-28 19:32:08,841 ERROR [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-1:null) (logid:21246b44) Failed to create
> > /mnt/e7f12bcd-9fae-388d-bbbe-cf0984e7776d/8f59974a-e5f8-4ba3-9808-3db35f3f3af1.qcow2
> > due to a failed executing of qemu-img: qemu-img:
> > /mnt/e7f12bcd-9fae-388d-bbbe-cf0984e7776d/8f59974a-e5f8-4ba3-9808-3db35f3f3af1.qcow2:
> > Image is not in qcow2 format
> > Formatting '/mnt/e7f12bcd-9fae-388d-bbbe-cf0984e7776d/8f59974a-e5f8-4ba3-9808-3db35f3f3af1.qcow2',
> > fmt=qcow2 size=85899345920 cluster_size=65536 preallocation=off
> > lazy_refcounts=off refcount_bits=16
> > 2022-03-28 19:32:08,928 ERROR [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-1:null) (logid:21246b44) Failed to convert
> > /san/primary/vm_vol1/2dbd1ee3-9ae7-4e0c-b52e-c9767ea33208 to
> > /mnt/e7f12bcd-9fae-388d-bbbe-cf0984e7776d/8f59974a-e5f8-4ba3-9808-3db35f3f3af1.qcow2
> > the error was: qemu-img:
> > /mnt/e7f12bcd-9fae-388d-bbbe-cf0984e7776d/8f59974a-e5f8-4ba3-9808-3db35f3f3af1.qcow2:
> > error while converting qcow2: Image is not in qcow2 format
> > 2022-03-28 19:32:08,928 INFO [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-1:null) (logid:21246b44) Attempting to remove storage
> > pool e7f12bcd-9fae-388d-bbbe-cf0984e7776d from libvirt
> > 2022-03-28 19:32:08,931 INFO [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-1:null) (logid:21246b44) Storage pool
> > e7f12bcd-9fae-388d-bbbe-cf0984e7776d has no corresponding secret.
> > Not removing any secret.
> >
> > Checking the image in question with qemu-img info shows it is indeed a
> > qcow2 image:
> >
> > image: /san/primary/vm_vol1/2dbd1ee3-9ae7-4e0c-b52e-c9767ea33208
> > file format: qcow2
> > virtual size: 80 GiB (85899345920 bytes)
> > disk size: 28 GiB
> > cluster_size: 65536
> > Format specific information:
> >     compat: 1.1
> >     lazy refcounts: false
> >     refcount bits: 16
> >     corrupt: false
> >
> > Online migrations with volumes also fail. The new volumes remain in the
> > "Migrating" state forever and the job fails.
> >
> > The environment is Ubuntu 20.04.
> >
> > --
> > p4nd0ra - the curious



-- 
p4nd0ra - the curious
