Re: [ovirt-users] VMs with multiple vdisks don't migrate
Hi,
Yes. On 2018-02-16 (the vdsm logs) I tried with a VM running on ginger (192.168.0.6), migrated (or rather failing to migrate...) to victor (192.168.0.5), while the engine.log in the first mail, on 2018-02-12, was for VMs running on victor, migrated (or failing to migrate...) to ginger. The symptoms were exactly the same in both directions, and the VMs work like a charm before, and even after (the migration being "killed" by a poweroff of the VMs).
Am I the only one experiencing this problem?
Thanks
--
Cordialement,
Frank Soyer

On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk wrote:
Hi Frank,
Sorry about the delayed response. I've been going through the logs you attached, but I could not find any specific indication of why the migration failed because of the disk you mentioned. Does this VM run with both disks on the target host without migration?
Regards,
Maor

On Fri, Feb 16, 2018 at 11:03 AM, fsoyer wrote:
Hi Maor,
Sorry for the double post; I changed the email address of my account and assumed I needed to re-post. And thank you for your time. Here are the logs. I added a vdisk to an existing VM: it no longer migrates, and I have to power it off after several minutes. Then simply deleting the second disk makes it migrate in exactly 9 s without problem!
https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
--
Cordialement,
Frank Soyer

On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk wrote:
Hi Frank,
I already replied to your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr and victor.local.systea.fr?
Thanks,
Maor

On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote:
Hi all,
Yesterday I discovered a problem when migrating VMs with more than one vdisk. On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test from a template with a 20G vdisk.
On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well.
Then I saw some updates waiting on the host and tried to put it into maintenance... but it got stuck on the two VMs. They were marked "migrating" but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for those VMs: it failed. The only way to stop it was to power off the VMs: the kvm processes then died on the 2 hosts and the GUI reported a failed migration.
In doubt, I tried deleting the second vdisk on one of these VMs: it then migrated without error! And with no access problem. I tried extending the first vdisk of the second VM, then deleting the second vdisk: it now migrates without problem! So after another test with a VM with 2 vdisks, I can say that this is what blocked the migration process :(
In engine.log, for a VM with 1 vdisk migrating well, we see:

2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false.
Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VM
Action group MIGRATE_VM with role type USER
2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6,
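As an aside, the convergenceSchedule parameter in the log lines above describes how the engine progressively raises the allowed downtime (presumably in milliseconds, matching the setDowntime verb) as the migration stalls, and finally aborts. A minimal sketch of decoding it, using the schedule string copied verbatim from the log above (the parsing itself is illustrative, not an oVirt API):

```python
# Decode the convergenceSchedule string from the engine.log excerpt above.
# The string is copied verbatim from the log; the regex parsing below is a
# best-effort sketch for readability, not part of oVirt itself.
import re

schedule = ("[init=[{name=setDowntime, params=[100]}], "
            "stalling=[{limit=1, action={name=setDowntime, params=[150]}}, "
            "{limit=2, action={name=setDowntime, params=[200]}}, "
            "{limit=3, action={name=setDowntime, params=[300]}}, "
            "{limit=4, action={name=setDowntime, params=[400]}}, "
            "{limit=6, action={name=setDowntime, params=[500]}}, "
            "{limit=-1, action={name=abort, params=[]}}]]")

# Initial step: "init=[{name=setDowntime, params=[100]}]"
init = re.search(r"init=\[\{name=setDowntime, params=\[(\d+)\]\}\]", schedule)
print(f"initial downtime: {init.group(1)} ms")

# Stalling steps: "{limit=N, action={name=X, params=[Y]}}"
parsed = re.findall(r"limit=(-?\d+), action=\{name=(\w+), params=\[(\d*)\]\}",
                    schedule)
for limit, name, param in parsed:
    if name == "abort":
        print(f"stall step {limit}: abort the migration")
    else:
        print(f"stall step {limit}: {name} -> {param} ms")
```

So a stalled migration is given 150, 200, 300, 400, then 500 ms of allowed downtime before the engine gives up; this matches the failed-migration behavior described above, where the engine eventually reports the migration as failed.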