Re: [ovirt-users] VM with multiple vdisks can't migrate

2018-02-14 Thread Maor Lipchuk
Hi Frank,

Can you please attach the VDSM logs from the time of the migration failure
for both hosts:
  ginger.local.systea.fr and victor.local.systea.fr
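
If it helps, a minimal sketch along these lines, assuming the default
/var/log/vdsm/vdsm.log location, can pull just the window around the
failure out of each host's log; the window boundaries below are
placeholders to adjust to the actual failure time:

    # extract_window.py -- print vdsm.log entries inside a time window.
    # Assumes entries start with a "YYYY-MM-DD HH:MM" timestamp;
    # continuation lines (e.g. tracebacks) stay with their entry.
    import sys

    START = "2018-02-12 16:46"  # placeholder: just before the migration
    END = "2018-02-12 17:10"    # placeholder: after the failure

    in_window = False
    with open("/var/log/vdsm/vdsm.log") as log:
        for line in log:
            stamp = line[:16]
            if stamp[:4].isdigit():  # a new entry starts here
                in_window = START <= stamp <= END
            if in_window:
                sys.stdout.write(line)

Run it on both ginger and victor and attach the output.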

Thanks,
Maor

On Tue, Feb 13, 2018 at 12:07 PM, fsoyer wrote:

> Hi all,
> Yesterday I discovered a problem when migrating VMs with more than one
> vdisk.
> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
> VMs needed for a test from a template with a 20G vdisk. To these VMs I
> added a 100G vdisk (for these tests I didn't want to waste time extending
> the existing vdisks... but I lost time in the end...). The VMs with the 2
> vdisks work well.
> Then I saw some updates waiting on the host and tried to put it into
> maintenance... but the process got stuck on those two VMs. They were
> marked "migrating" but were no longer accessible. Other (small) VMs with
> only 1 vdisk migrated without problem at the same time.
> I saw that a kvm process for the (big) VMs was launched on the source AND
> destination host, but after tens of minutes the migration and the VMs were
> still frozen. I tried to cancel the migration for the VMs: it failed. The
> only way to stop it was to power off the VMs: the kvm process died on the
> 2 hosts and the GUI reported a failed migration.
> On a hunch, I tried to delete the second vdisk on one of these VMs: it
> then migrated without error! And with no access problem.
> I tried to extend the first vdisk of the second VM, then delete the second
> vdisk: it now migrates without problem!
>
> So after another test with a VM with 2 vdisks, I can say that this is what
> blocked the migration process :(
>
> In engine.log, for a VM with 1 vdisk migrating well, we see:
>
> 2018-02-12 16:46:29,705+01 INFO  
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to
> object 
> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
> sharedLocks=''}'
> 2018-02-12 16:46:29,955+01 INFO  
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
> Running command: MigrateVmToServerCommand internal: false. Entities
> affected :  ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group
> MIGRATE_VM with role type USER
> 2018-02-12 16:46:30,261+01 INFO  
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
> maxIncomingMigrations='2', maxOutgoingMigrations='2',
> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
> action={name=setDowntime, params=[200]}}, {limit=3,
> action={name=setDowntime, params=[300]}}, {limit=4,
> action={name=setDowntime, params=[400]}}, {limit=6,
> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
> params=[]}}]]'}), log id: 14f61ee0
> 2018-02-12 16:46:30,262+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> (org.ovirt.thread.pool-6-thread-32)
> [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName
> = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true',
> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
> maxIncomingMigrations='2', maxOutgoingMigrations='2',
> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
> action={name=setDowntime, params=[200]}}, {limit=3,
> action={name=setDowntime, params=[300]}}, {limit=4,
> action={name=setDowntime, params=[400]}}, {limit=6,
> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
> params=[]}}]]'}), log id: 775cd381
> 2018-02-12 16:46:30,277+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> (org.ovirt.thread.pool-6-thread-32)
> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand,
> log id: 775cd381
> 2018-02-12 16:46:30,285+01 INFO  
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (org.ovirt.thread.pool-6-thread-32) 

[ovirt-users] VM with multiple vdisks can't migrate

2018-02-13 Thread fsoyer

Hi all,
Yesterday I discovered a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs 
needed for a test from a template with a 20G vdisk. To these VMs I added a 100G 
vdisk (for these tests I didn't want to waste time extending the existing 
vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well.
Then I saw some updates waiting on the host and tried to put it into 
maintenance... but the process got stuck on those two VMs. They were marked 
"migrating" but were no longer accessible. Other (small) VMs with only 1 vdisk 
migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND 
destination host, but after tens of minutes the migration and the VMs were 
still frozen. I tried to cancel the migration for the VMs: it failed. The only 
way to stop it was to power off the VMs: the kvm process died on the 2 hosts 
and the GUI reported a failed migration.
On a hunch, I tried to delete the second vdisk on one of these VMs: it then 
migrated without error! And with no access problem.
I tried to extend the first vdisk of the second VM, then delete the second 
vdisk: it now migrates without problem!

So after another test with a VM with 2 vdisks, I can say that this is what 
blocked the migration process :(
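
As an aside, before putting a host into maintenance it is possible to list 
which VMs carry more than one vdisk with a minimal sketch using the Python 
SDK (ovirtsdk4); the engine URL, credentials and CA file below are 
placeholders:

    # list_multidisk_vms.py -- list VMs with more than one disk attachment.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.local/ovirt-engine/api',  # placeholder
        username='admin@internal',                    # placeholder
        password='secret',                            # placeholder
        ca_file='ca.pem',                             # placeholder
    )
    try:
        vms_service = connection.system_service().vms_service()
        for vm in vms_service.list():
            attachments = (vms_service.vm_service(vm.id)
                           .disk_attachments_service().list())
            if len(attachments) > 1:
                print('%s: %d vdisks' % (vm.name, len(attachments)))
    finally:
        connection.close()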

In engine.log, for a VM with 1 vdisk migrating well, we see:
2018-02-12 16:46:29,705+01 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to 
object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', 
sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
Running command: MigrateVmToServerCommand internal: false. Entities affected :  
ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with 
role type USER
2018-02-12 16:46:30,261+01 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', 
hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', 
dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, 
action={name=abort, params=[]}}]]'}), log id: 14f61ee0
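
The convergenceSchedule in this entry is the engine's plan for a stalling 
migration: start with 100 ms of allowed downtime (the init step), raise it 
step by step each time the migration stalls, and abort after the last step. 
A rough illustration of the lookup (hypothetical helper, not oVirt code):

    # convergence.py -- illustrate the schedule logged above.
    SCHEDULE = [
        (1, ('setDowntime', 150)),
        (2, ('setDowntime', 200)),
        (3, ('setDowntime', 300)),
        (4, ('setDowntime', 400)),
        (6, ('setDowntime', 500)),
        (-1, ('abort', None)),  # limit -1: unconditional last resort
    ]

    def next_action(stalled_iterations):
        """Pick the action for the given number of stalled iterations."""
        for limit, action in SCHEDULE:
            if limit == -1 or stalled_iterations <= limit:
                return action

    # next_action(2) -> ('setDowntime', 200); next_action(7) -> ('abort', None)
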
2018-02-12 16:46:30,262+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, 
MigrateVDSCommandParameters:{runAsync='true', 
hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', 
dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, 
action={name=abort, params=[]}}]]'}), log id: 775cd381
2018-02-12 16:46:30,277+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
FINISH, MigrateBrokerVDSCommand, log id: 775cd381
2018-02-12 16:46:30,285+01 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 
2f712024-5982-46a8-82c8-fd8293da5725, Job ID: