Great!
This is an old bug that should already be fixed, so I suspect you are running older versions of vdsm, libvirt and qemu.


On 06/20/2014 02:45 PM, Alexandr Krivulya wrote:
Thanks, detaching CD solves this problem.

20.06.2014 16:34, Dafna Ron wrote:
the vm's qemu log shows an error on the destination host:

qemu: warning: error while loading state section id 3
load of migration failed

but I think the issue is that the vm has a cd attached which no
longer exists or is no longer available to the vm.

Can you please detach any attached disk, activate the iso
domain if it's down, and try migrating again?
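If you want to check this from the hypervisor side first, libvirt's CLI can show and eject the stale medium. A sketch, assuming the domain name z-store.lis.ua and the hdc target that appear in the log below (adjust both to your setup):

```shell
# List block devices attached to the guest; the cdrom appears as target hdc
virsh domblklist z-store.lis.ua --details

# Eject whatever medium is in the cdrom tray (the drive itself stays attached)
virsh change-media z-store.lis.ua hdc --eject
```

The equivalent in the oVirt webadmin is the VM's "Change CD" dialog with an empty/eject selection.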

here is the error from the vdsm log:

Thread-153::DEBUG::2014-06-20
14:49:06,162::task::974::TaskManager.Task::(_decref)
Task=`b37186f5-7959-495b-b9e3-816c2d3418ac`::ref 0 aborting False
libvirtEventLoop::DEBUG::2014-06-20
14:49:06,367::vm::4846::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`87c108fa-1ade-47a4-be66-f0416752eec4`::event Stopped detail 5
opaque None
libvirtEventLoop::INFO::2014-06-20
14:49:06,368::vm::2169::vm.Vm::(_onQemuDeath)
vmId=`87c108fa-1ade-47a4-be66-f0416752eec4`::underlying process
disconnected
libvirtEventLoop::INFO::2014-06-20
14:49:06,368::vm::4326::vm.Vm::(releaseVm)
vmId=`87c108fa-1ade-47a4-be66-f0416752eec4`::Release VM resources
Thread-65::DEBUG::2014-06-20
14:49:06,396::libvirtconnection::108::libvirtconnection::(wrapper)
Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not
found: no domain with matching uuid '87c108fa-1ade-47a4-be66-f0416752e
ec4'
libvirtEventLoop::WARNING::2014-06-20
14:49:06,394::clientIF::365::vds::(teardownVolumePath) Drive is not a
vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2
VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound met
hod Drive._checkIoTuneCategories of <vm.Drive object at
0x7ff3cc072cd0>> _customize:<bound method Drive._customize of
<vm.Drive object at 0x7ff3cc072cd0>> _deviceXML:<disk device="cdrom"
type="file">
       <driver name="qemu" type="raw"/>
       <source startupPolicy="optional"/>
       <target bus="ide" dev="hdc"/>
       <readonly/>
       <serial/>
       <alias name="ide0-1-0"/>
       <address bus="1" controller="0" target="0" type="drive" unit="0"/>
     </disk> _makeName:<bound method Drive._makeName of <vm.Drive
object at 0x7ff3cc072cd0>> _setExtSharedState:<bound method
Drive._setExtSharedState of <vm.Drive object at 0x7ff3cc072cd0>>
_validateIoTuneParams:<bound method Drive._val
idateIoTuneParams of <vm.Drive object at 0x7ff3cc072cd0>>
address:{'bus': '1', 'controller': '0', 'type': 'drive', 'target':
'0', 'unit': '0'} alias:ide0-1-0 apparentsize:0 blockDev:False
cache:none conf:{'guestFQDN': '', 'acpiEnable': 'true',
'emulatedMachine': 'rhel6.4.0', 'afterMigrationStatus': '',
'tabletEnable': 'true', 'pid': '0', 'memGuaranteedSize': 1024,
'spiceSslCipherSuite': 'DEFAULT', 'displaySecurePort': '-1',
'timeOffset': '10801', 'cpuType': 'Penryn', 'custom': {}, 'pauseCode':
'NOERR', 'migrationDest': 'libvirt', 'smp': '2', 'vmType': 'kvm',
'memSize': 2048, 'smpCoresPerSocket': '1', 'vmName': 'z-store.lis.ua',
'nice': '0', 'username': 'Unknown', 'clientIp': '', 'vmId':
'87c108fa-1ade-47a4-be66-f0416752eec4', 'displayIp': '0',
'displayPort': '-1', 'smartcardEnable': 'false',
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'nicModel': 'rtl8139,pv', 'keyboardLayout': 'en-us', 'kvmEnable':
'true', 'transparentHugePages': 'true', 'devices': [{'device': 'unix',
'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0',
'controller': '0', 'type': 'virtio-serial', 'port': '1'}}, {'device':
'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus':
'0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}},
{'device': 'usb', 'alias': 'usb0', 'type': 'controller', 'address':
{'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
'function': '0x2'}}, {'device': 'ide', 'alias': 'ide0', 'type':
'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain':
'0x0000', 'type': 'pci', 'function': '0x1'}}, {'device':
'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller',
'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type':
'pci', 'function': '0x0'}}, {'specParams': {'vram': '32768', 'heads':
'1'}, 'alias': 'video0', 'deviceId':
'568266ff-9e6c-4ac2-9dff-4ac298db00ca', 'address': {'slot': '0x02',
'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'},
'device': 'cirrus', 'type': 'video'}, {'nicModel': 'pv', 'macAddr':
'00:1a:4a:51:89:a6', 'linkActive': True, 'network': 'ovirtmgmt',
'specParams': {}, 'filter': 'vdsm-no-mac-spoofing', 'alias': 'net0',
'deviceId': '395be56a-2e10-405d-baed-dec5c5186a83', 'address':
{'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name':
'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias':
'ide0-1-0', 'specParams': {'path': ''}, 'readonly': 'True',
'deviceId': 'a65fa707-1cc3-4960-b6e3-6aa7ca124e48', 'address': {'bus':
'1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'},
'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'},
{'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000',
'type': 'pci', 'function': '0x0'}, 'reqsize': '0', 'index': 0,
'iface': 'virtio', 'apparentsize': '32212254720', 'specParams': {},
'imageID': '35499e90-dc09-4caa-9537-afdde82223ca', 'readonly':
'False', 'shared': 'false', 'truesize': '9280925696', 'type': 'disk',
'domainID': '83c4e59a-d810-4965-b7c0-ac2839b709f8', 'volumeInfo':
{'domainID': '83c4e59a-d810-4965-b7c0-ac2839b709f8', 'volType':
'path', 'leaseOffset': 0, 'path':
'/rhev/data-center/mnt/glusterSD/127.0.0.1:VM__Storage/83c4e59a-d810-4965-b7c0-ac2839b709f8/images/35499e90-dc09-4caa-9537-afdde82223ca/e9abc7ad-57dc-4636-a78c-149893a121a2',
'volumeID': 'e9abc7ad-57dc-4636-a78c-149893a121a2', 'leasePath':
'/rhev/data-center/mnt/glusterSD/127.0.0.1:VM__Storage/83c4e59a-d810-4965-b7c0-ac2839b709f8/images/35499e90-dc09-4caa-9537-afdde82223ca/e9abc7ad-57dc-4636-a78c-149893a121a2.lease',
'imageID': '35499e90-dc09-4caa-9537-afdde82223ca'}, 'format': 'raw',
'deviceId': '35499e90-dc09-4caa-9537-afdde82223ca', 'poolID':
'5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/

Dafna

On 06/20/2014 02:10 PM, Alexandr Krivulya wrote:
Attached, thank you

20.06.2014 15:15, Dafna Ron wrote:
you might have a real problem and the migration got stuck - increasing
the timeout will not solve anything.
Please attach both src and dst vdsm, libvirt and vm qemu logs + the
engine log.

Thanks,
Dafna


On 06/20/2014 01:00 PM, Alexandr Krivulya wrote:
Hi!
How can I adjust the migration timeout? I see this error in my vdsm.log
when I try to migrate one of my VMs:

The migration took 130 seconds which is exceeding the configured
maximum
time for migrations of 128 seconds. The migration will be aborted.
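For reference, vdsm derives that per-VM limit from the guest's memory size via the migration_max_time_per_gib_mem option in /etc/vdsm/vdsm.conf (default 64 seconds per GiB, if I recall the defaults of that era correctly). A minimal sketch of the arithmetic, which matches the 128-second maximum reported for this 2 GiB guest:

```python
def migration_timeout(mem_size_mib, max_time_per_gib=64):
    """Effective migration deadline in seconds: seconds-per-GiB times guest RAM.

    max_time_per_gib mirrors vdsm's migration_max_time_per_gib_mem setting.
    """
    gib = mem_size_mib / 1024.0
    return int(max_time_per_gib * gib)

# The VM in this thread has memSize 2048 MiB, giving the 128 s limit in the log:
print(migration_timeout(2048))  # 128
```

Raising migration_max_time_per_gib_mem under the [vars] section of vdsm.conf and restarting vdsmd would lift the deadline, but as noted above, a migration that overruns it usually points at a real problem rather than a too-short timeout.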
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
