Hi again,

Tried this using a different node and disk, with same result:
transferred: 53687091200 bytes remaining: 0 bytes total: 53687091200 bytes progression: 100.00 %
TASK ERROR: storage migration failed: mirroring error: VM 100 qmp command 'block-job-complete' failed - The active block job for device 'drive-virtio0' cannot be completed

It is the same guest, on a different VM instance.

I'm using nfs-kernel-server from Debian for the NFS service, with the following /etc/exports:

/srv/nfs2 192.168.4.91(rw,sync,no_subtree_check) 192.168.4.92(rw,sync,no_subtree_check) 192.168.4.93(rw,sync,no_subtree_check)
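FWIW, all three nodes should be getting identical export options; a throwaway check of the line above (nothing Proxmox-specific, just parsing exports(5) syntax) confirms that:

```python
import re

# The exports line from /etc/exports quoted above.
line = ("/srv/nfs2 192.168.4.91(rw,sync,no_subtree_check) "
        "192.168.4.92(rw,sync,no_subtree_check) "
        "192.168.4.93(rw,sync,no_subtree_check)")

def parse_export(line):
    """Split a simple exports(5) line into (path, {client: [options]})."""
    path, *specs = line.split()
    clients = {}
    for spec in specs:
        m = re.fullmatch(r"([^(]+)\(([^)]*)\)", spec)
        clients[m.group(1)] = m.group(2).split(",")
    return path, clients

path, clients = parse_export(line)
print(path)                     # /srv/nfs2
print(clients["192.168.4.92"])  # ['rw', 'sync', 'no_subtree_check']
```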

root@pmx2:/etc# pveversion -v
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-2 (running version: 3.3-2/995e687e)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-33-pve: 2.6.32-138
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-35
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-9
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Any hint? :)

On 08/10/14 16:50, Eneko Lacunza wrote:
Hi,

I have tried to live-migrate storage from local storage on another node in the same cluster to the same NFS export, and it doesn't work either:

create full clone of drive virtio0 (local:103/vm-103-disk-1.raw)
trying to aquire cfs lock 'storage-nfs1' ... OK
Formatting '/mnt/pve/nfs1/images/103/vm-103-disk-1.raw', fmt=raw size=53687091200
transferred: 0 bytes remaining: 53687091200 bytes total: 53687091200 bytes progression: 0.00 %
transferred: 41943040 bytes remaining: 53645148160 bytes total: 53687091200 bytes progression: 0.08 %
transferred: 83886080 bytes remaining: 53603205120 bytes total: 53687091200 bytes progression: 0.16 %
transferred: 136314880 bytes remaining: 53550776320 bytes total: 53687091200 bytes progression: 0.25 %
transferred: 188743680 bytes remaining: 53498347520 bytes total: 53687091200 bytes progression: 0.35 %
[...]
transferred: 53606481920 bytes remaining: 80609280 bytes total: 53687091200 bytes progression: 99.85 %
transferred: 53648424960 bytes remaining: 38666240 bytes total: 53687091200 bytes progression: 99.93 %
transferred: 53680340992 bytes remaining: 6750208 bytes total: 53687091200 bytes progression: 99.99 %
transferred: 53687091200 bytes remaining: 0 bytes total: 53687091200 bytes progression: 100.00 %
TASK ERROR: storage migration failed: mirroring error: VM 103 qmp command 'block-job-complete' failed - The active block job for device 'drive-virtio0' cannot be completed
---

The VM has the latest virtio drivers for Windows (build 81).

-------- Forwarded Message --------
Subject:        Storage migration
Date:   Wed, 08 Oct 2014 14:17:42 +0200
From:   Eneko Lacunza <elacu...@binovo.es>
To:     pve-user@pve.proxmox.com



Hi all,

I've found some problems with storage migration in the recent 3.3
version, both with the vanilla install from the ISO and after updating
from pve-no-subscription.

I have a WS2012R2 VM with a virtio block device on local storage. Local
storage (/var/lib/vz) and NFS storage (/srv/nfs) are on the same machine.
- Moving the disk to NFS shared storage with the VM off works OK.
- Moving the disk to NFS shared storage with the VM on doesn't work. The
first try reports:
---
create full clone of drive virtio0 (local:100/vm-100-disk-1.raw)
Formatting '/mnt/pve/nfs1/images/100/vm-100-disk-2.raw', fmt=raw size=0
transferred: 0 bytes remaining: 53687091200 bytes total: 53687091200
bytes progression: 0.00 %
TASK ERROR: storage migration failed: mirroring error: mirroring job
seem to have die. Maybe do you have bad sectors? at
/usr/share/perl5/PVE/QemuServer.pm line 5170.
---
After re-trying:
---
create full clone of drive virtio0 (local:100/vm-100-disk-1.raw)
Formatting '/mnt/pve/nfs1/images/100/vm-100-disk-2.raw', fmt=raw
size=53687091200
transferred: 0 bytes remaining: 53687091200 bytes total: 53687091200 bytes progression: 0.00 %
transferred: 104857600 bytes remaining: 53582233600 bytes total: 53687091200 bytes progression: 0.20 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
[...]
---
The percentage at which it stalls varies between attempts, but it is always very low (<1%).
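In case it helps anyone reproduce this: the stall is easy to confirm mechanically from the task log, since the "transferred" counter simply stops moving. A quick throwaway sketch (assuming the log format shown above):

```python
import re

# Sample entries copied from the stalled task log above (truncated).
log = """
transferred: 104857600 bytes remaining: 53582233600 bytes total: 53687091200 bytes progression: 0.20 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
transferred: 125829120 bytes remaining: 53561262080 bytes total: 53687091200 bytes progression: 0.23 %
"""

def stalled(log_text, repeats=3):
    """True if the transferred byte count repeats `repeats` times in a row."""
    values = [int(m) for m in re.findall(r"transferred: (\d+) bytes", log_text)]
    run = 1
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur == prev else 1
        if run >= repeats:
            return True
    return False

print(stalled(log))  # → True
```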

There are no hard disk errors in dmesg/syslog.

I have also seen apparently similar, slightly different issues when
migrating from NFS to RBD, and from RBD to NFS.

Cheers
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
       943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es





