[ovirt-users] Migration failed after upgrade engine from 4.3 to 4.4

2023-05-15 Thread Emmanuel Ferrandi

Hi !

When I try to migrate a powered-on VM (regardless of OS) from one 
hypervisor to another, the VM is immediately shut down with this error 
message:


   Migration failed: Admin shut down from the engine (VM: VM, Source:
   HP11).

The oVirt engine has been upgraded from version 4.3 to version 4.4.
Some nodes are in version 4.3 and others in version 4.4.

Here are the oVirt versions for selected hypervisors:

 * HP11 : 4.4
 * HP5 : 4.4
 * HP6 : 4.3

Here are the migration attempts I tried with a powered-on VM (source HP > destination HP):

 * HP6 > HP5 : OK
 * HP6 > HP11 : OK
 * HP5 > HP11 : OK
 * HP5 > HP6 : OK
 * HP11 > HP5 : *NOK*
 * HP11 > HP6 : OK

As shown above, migrating a VM between hypervisors running different oVirt 
versions is not a problem.
Migrating a VM between the two HPs running the same 4.4 version works 
only in one direction (HP5 to HP11) and fails in the other (HP11 to HP5).


I have already tried reinstalling both HPs with version 4.4, but without success.

Here are the logs on HP5 concerning the VM:

   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api.virt] START destroy(gracefulAttempts=1)
   from=:::172.20.3.250,37534, flow_id=43364065,
   vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:48)
   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api] FINISH destroy error=Virtual machine does not
   exist: {'vmId': 'd14f75cd-1cb1-440b-9780-6b6ee78149ac'} (api:129)
   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api.virt] FINISH destroy return={'status': {'code': 1,
   'message': "Virtual machine does not exist: {'vmId':
   'd14f75cd-1cb1-440b-9780-6b6ee78149ac'}"}}
   from=:::172.20.3.250,37534, flow_id=43364065,
   vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:54)

   /var/log/libvirt/qemu/VM.log:2023-03-24 14:56:51.474+:
   initiating migration
   /var/log/libvirt/qemu/VM.log:2023-03-24 14:56:54.342+:
   shutting down, reason=migrated
   /var/log/libvirt/qemu/VM.log:2023-03-24T14:56:54.870528Z qemu-kvm:
   terminating on signal 15 from pid 4379 ()
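
Side note for anyone comparing the two 4.4 hosts: the engine flow_id (43364065) and the VM id also appear in vdsm.log on both HP11 and HP5, so the matching lines can be pulled out and compared side by side. A minimal sketch (the grep patterns are only illustrative, not an official procedure):

# Run on HP11 (source) and HP5 (destination):
grep 43364065 /var/log/vdsm/vdsm.log
grep d14f75cd-1cb1-440b-9780-6b6ee78149ac /var/log/vdsm/vdsm.log | grep -iE 'migrat|destroy|error'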

Here are the logs on the engine concerning the VM:

   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,333+02 INFO
   [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default
   task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
   MigrateVDSCommand(
   MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
   vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
   dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
   dstHost='HP5:54321', migrationMethod='ONLINE',
   tunnelMigration='false', migrationDowntime='0', autoConverge='true',
   migrateCompressed='false', migrateEncrypted='null',
   consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
   maxIncomingMigrations='2', maxOutgoingMigrations='2',
   convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
   stalling=[{limit=1, action={name=setDowntime, params=[150]}},
   {limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
   action={name=setDowntime, params=[300]}}, {limit=4,
   action={name=setDowntime, params=[400]}}, {limit=6,
   action={name=setDowntime, params=[500]}}, {limit=-1,
   action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
   6a3507d0
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,334+02 INFO
   [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
   (default task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
   MigrateBrokerVDSCommand(HostName = HP11,
   MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
   vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
   dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
   dstHost='HP5:54321', migrationMethod='ONLINE',
   tunnelMigration='false', migrationDowntime='0', autoConverge='true',
   migrateCompressed='false', migrateEncrypted='null',
   consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
   maxIncomingMigrations='2', maxOutgoingMigrations='2',
   convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
   stalling=[{limit=1, action={name=setDowntime, params=[150]}},
   {limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
   action={name=setDowntime, params=[300]}}, {limit=4,
   action={name=setDowntime, params=[400]}}, {limit=6,
   action={name=setDowntime, params=[500]}}, {limit=-1,
   action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
   f254f72
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,246+02 INFO
   [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
   (ForkJoinPool-1-worker-9) [3f0e966d] VM
   'd14f75cd-1cb1-440b-9780-6b6ee78149ac' was reported as Down on VDS
   '6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,296+02 INFO
   [org.ovirt.engine.core.bll.SaveVmExternalDataCommand]
   (ForkJoinPool-1-worker-9) [43364065] Running 

[ovirt-users] Migration failed due to an Error: Fatal error during migration

2023-02-12 Thread Anthony Bustillos Gonzalez
Hello, 

I have this issue when I try to migrate a VM to another host:

"Migration failed due to an Error: Fatal error during migration"

OS: Oracle Linux 8.7
QEMU-KVM: 6.1.1

Log:

2023-02-09 11:50:38,991-06 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-264062) [757bd45d] EVENT_ID: 
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal 
error during migration (VM: Coopavg-BDDESA01, Source: Coopavg-Moscow, 
Destination: Coopavg-Berlin).
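
The engine event above only records the generic failure; the underlying reason is usually in vdsm.log and the per-VM QEMU log on the source host. A rough sketch of where to look (host and VM names are taken from the event; the grep patterns are only illustrative):

# On the source host (Coopavg-Moscow), around 2023-02-09 11:50:
grep -iE 'migrat|libvirtError|error' /var/log/vdsm/vdsm.log | less
less /var/log/libvirt/qemu/Coopavg-BDDESA01.log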


[ovirt-users] Migration failed due to an Error: Fatal error during migration

2022-01-24 Thread Gunasekhar Kothapalli via Users

I am able to power on VMs on the newly upgraded host, but I am not able to migrate
VMs from other hosts to the new host, or from the newly upgraded host to other
hosts. This worked fine before the upgrade.
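
The qemu-kvm errors in the host logs below ("Failed to load PCIDevice:config", "load of migration failed: Invalid argument") usually indicate a device or machine-type mismatch between the QEMU builds on the two hosts after the upgrade. A quick comparison sketch (package patterns and the VM name are taken from the logs; adjust to your hosts):

# Run on both source and destination and compare the output:
rpm -qa | grep -E 'qemu-kvm|libvirt'
# Machine type the running VM was started with (on the source host):
virsh -r dumpxml zzz2019 | grep -i 'machine='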

Host logs
==
Unable to read from monitor: Connection reset by peer
internal error: qemu unexpectedly closed the monitor: 
2022-01-24T17:51:46.598571Z
qemu-kvm: get_pci_config_device: Bad config >
2022-01-24T17:51:46.598627Z qemu-kvm: Failed to load PCIDevice:config
2022-01-24T17:51:46.598635Z qemu-kvm: Failed to load
pcie-root-port:parent_obj.parent_obj.parent_obj
2022-01-24T17:51:46.598642Z qemu-kvm: error while loading state for instance 
0x0 of device
':00:02.0/pcie-root-port'
2022-01-24T17:51:46.598830Z qemu-kvm: load of migration failed: Invalid argument
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected

Engine Logs
===

2022-01-24 11:31:25,080-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-21) [] Adding VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) to re-run list
2022-01-24 11:31:25,099-07 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-2331914) [] EVENT_ID: 
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal 
error during migration (VM: zzz2019, Source: lcoskvmp07.cos.is.keysight.com, 
Destination: lcoskvmp03.cos.is.keysight.com).
2022-01-24 18:39:47,897-07 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-771) [a78e85d4-068a-41c1-a8fa-b3acd8c69317] EVENT_ID: 
VM_MIGRATION_START(62), Migration started (VM: zzz2019, Source: 
lcoskvmp07.cos.is.keysight.com, Destination: lcoskvmp03.cos.is.keysight.com, 
User: k.gunasek...@non.keysight.com@KEYSIGHT).
2022-01-24 18:40:01,417-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-27) [] VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) was unexpectedly detected as 
'Down' on VDS 
'ee23b44d-976d-4889-8769-59b56e4b23c0'(lcoskvmp03.cos.is.keysight.com) 
(expected on '0d58953f-b3cc-4bac-b3b2-08ba1bca')
2022-01-24 18:40:01,589-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-27) [] VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) was unexpectedly detected as 
'Down' on VDS 
'ee23b44d-976d-4889-8769-59b56e4b23c0'(lcoskvmp03.cos.is.keysight.com) 
(expected on '0d58953f-b3cc-4bac-b3b2-08ba1bca')
2022-01-24 18:40:01,589-07 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-27) [] Migration of VM 'zzz2019' to host 
'lcoskvmp03.cos.is.keysight.com' failed: VM destroyed during the startup.
2022-01-24 18:40:01,591-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-17) [] VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) moved from 'MigratingFrom' --> 
'Up'
2022-01-24 18:40:01,591-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-17) [] Adding VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) to re-run list
2022-01-24 18:40:01,611-07 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-2348837) [] EVENT_ID: 
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal 
error during migration (VM: zzz2019, Source: lcoskvmp07.cos.is.keysight.com, 
Destination: lcoskvmp03.cos.is.keysight.com).
[root@lcosovirt02 ovirt-engine]#


[ovirt-users] migration failed

2019-01-01 Thread 董青龙
Hi all,
I have an oVirt 4.2 environment with 3 hosts. Currently, none of the VMs in this
environment can be migrated, although all of them can be started on all 3
hosts. Can anyone help? Thanks a lot!
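
The engine log below only shows the VM being added to the re-run list; the actual failure reason should be in vdsm.log and the per-VM libvirt/QEMU log on the source and destination hosts. A minimal sketch of where to look (hostnames and the VM name are the ones from the log; the grep patterns are only illustrative):

# On horeb66 (source) and horeb65 (destination), around 09:41:
grep -iE 'ERROR|migrat' /var/log/vdsm/vdsm.log | less
grep -i error /var/log/libvirt/qemu/win7.log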


engine logs:


2019-01-02 09:41:26,868+08 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand] 
(default task-9) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[eff7f697-8a07-46e5-a631-a1011a0eb836=VM]', 
sharedLocks=''}'
2019-01-02 09:41:26,978+08 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] Running command: MigrateVmCommand 
internal: false. Entities affected :  ID: eff7f697-8a07-46e5-a631-a1011a0eb836 
Type: VMAction group MIGRATE_VM with role type USER
2019-01-02 09:41:27,019+08 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] START, MigrateVDSCommand( 
MigrateVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', 
vmId='eff7f697-8a07-46e5-a631-a1011a0eb836', srcHost='horeb66', 
dstVdsId='5bb18f6e-9c7e-4afd-92de-f6482bf752e5', dstHost='horeb65:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, 
action={name=abort, params=[]}}]]', dstQemu='192.168.128.78'}), log id: 1bd72db2
2019-01-02 09:41:27,019+08 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] START, MigrateBrokerVDSCommand(HostName 
= horeb66, 
MigrateVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', 
vmId='eff7f697-8a07-46e5-a631-a1011a0eb836', srcHost='horeb66', 
dstVdsId='5bb18f6e-9c7e-4afd-92de-f6482bf752e5', dstHost='horeb65:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, 
action={name=abort, params=[]}}]]', dstQemu='192.168.128.78'}), log id: 380b8d38
2019-01-02 09:41:27,025+08 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] FINISH, MigrateBrokerVDSCommand, log id: 
380b8d38
2019-01-02 09:41:27,029+08 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] FINISH, MigrateVDSCommand, return: 
MigratingFrom, log id: 1bd72db2
2019-01-02 09:41:27,036+08 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] EVENT_ID: VM_MIGRATION_START(62), 
Migration started (VM: win7, Source: horeb66, Destination: horeb65, User: 
admin@internal-authz). 
2019-01-02 09:41:41,557+08 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] VM 
'eff7f697-8a07-46e5-a631-a1011a0eb836'(win7) moved from 'MigratingFrom' --> 'Up'
2019-01-02 09:41:41,557+08 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Adding VM 
'eff7f697-8a07-46e5-a631-a1011a0eb836'(win7) to re-run list
2019-01-02 09:41:41,567+08 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Rerun VM 
'eff7f697-8a07-46e5-a631-a1011a0eb836'. Called from VDS 'horeb66'
2019-01-02 09:41:41,570+08 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168945) [] START, 
MigrateStatusVDSCommand(HostName = horeb66, 
MigrateStatusVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e',
 vmId='eff7f697-8a07-46e5-a631-a1011a0eb836'}), log id: 4ed2923c
2019-01-02 09:41:41,573+08 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 

Re: [ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-29 Thread Dan Kenigsberg
Thanks for following up on this. We need to put a little more effort into

Bug 1400952 - [RFE] Resolve listen IP for graphics attached to Open
vSwitch network

so that the hook is no longer needed.

Please let us know how oVirt+OvS is working for you!


On Wed, Mar 29, 2017 at 6:17 PM, Devin A. Bougie
 wrote:
> Just incase anyone else runs into this, you need to set 
> "migration_ovs_hook_enabled=True" in vdsm.conf.  It seems the vdsm.conf 
> created by "hosted-engine --deploy" did not list all of the options, so I 
> overlooked this one.
>
> Thanks for all the help,
> Devin
>
> On Mar 27, 2017, at 11:10 AM, Devin A. Bougie  
> wrote:
>> Hi, All.  We have a new oVirt 4.1.1 cluster up with the OVS switch type.  
>> Everything seems to be working great, except for live migration.
>>
>> I believe the red flag in vdsm.log on the source is:
>> Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
>>
>> Which results from vdsm assigning an arbitrary bridge name to each ovs 
>> bridge.
>>
>> Please see below for more details on the bridges and excerpts from the logs. 
>>  Any help would be greatly appreciated.
>>
>> Many thanks,
>> Devin
>>
>> SOURCE OVS BRIDGES:
>> # ovs-vsctl show
>> 6d96d9a5-e30d-455b-90c7-9e9632574695
>>Bridge "vdsmbr_QwORbsw2"
>>Port "vdsmbr_QwORbsw2"
>>Interface "vdsmbr_QwORbsw2"
>>type: internal
>>Port "vnet0"
>>Interface "vnet0"
>>Port classepublic
>>Interface classepublic
>>type: internal
>>Port "ens1f0"
>>Interface "ens1f0"
>>Bridge "vdsmbr_9P7ZYKWn"
>>Port ovirtmgmt
>>Interface ovirtmgmt
>>type: internal
>>Port "ens1f1"
>>Interface "ens1f1"
>>Port "vdsmbr_9P7ZYKWn"
>>Interface "vdsmbr_9P7ZYKWn"
>>type: internal
>>ovs_version: "2.7.0"
>>
>> DESTINATION OVS BRIDGES:
>> # ovs-vsctl show
>> f66d765d-712a-4c81-b18e-da1acc9cfdde
>>Bridge "vdsmbr_vdpp0dOd"
>>Port "vdsmbr_vdpp0dOd"
>>Interface "vdsmbr_vdpp0dOd"
>>type: internal
>>Port "ens1f0"
>>Interface "ens1f0"
>>Port classepublic
>>Interface classepublic
>>type: internal
>>Bridge "vdsmbr_3sEwEKd1"
>>Port "vdsmbr_3sEwEKd1"
>>Interface "vdsmbr_3sEwEKd1"
>>type: internal
>>Port "ens1f1"
>>Interface "ens1f1"
>>Port ovirtmgmt
>>Interface ovirtmgmt
>>type: internal
>>ovs_version: "2.7.0"
>>
>>
>> SOURCE VDSM LOG:
>> ...
>> 2017-03-27 10:57:02,567-0400 INFO  (jsonrpc/1) [vdsm.api] START migrate 
>> args=(, {u'incomingLimit': 2, u'src': 
>> u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', 
>> u'tunneled': u'false', u'enableGuestEvents': False, u'dst': 
>> u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': 
>> u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
>> u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, 
>> u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37)
>> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH migrate 
>> return={'status': {'message': 'Migration in progress', 'code': 0}, 
>> 'progress': 0} (api:43)
>> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC 
>> call VM.migrate succeeded in 0.01 seconds (__init__:515)
>> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
>> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM 
>> took: 0 seconds (migration:455)
>> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
>> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
>> qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri 
>> tcp://192.168.55.81 (migration:480)
>> 2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
>> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 
>> 'vdsmbr_QwORbsw2': No such device (migration:287)
>> 2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
>> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate 
>> (migration:429)
>> Traceback (most recent call last):
>>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, 
>> in run
>>self._startUnderlyingMigration(time.time())
>>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, 
>> in _startUnderlyingMigration
>>self._perform_with_downtime_thread(duri, muri)
>>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, 
>> in _perform_with_downtime_thread
>>self._perform_migration(duri, muri)
>>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, 
>> in _perform_migration
>>self._vm._dom.migrateToURI3(duri, params, flags)
>>  File 

Re: [ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-29 Thread Devin A. Bougie
Just in case anyone else runs into this: you need to set 
"migration_ovs_hook_enabled=True" in vdsm.conf.  It seems the vdsm.conf created 
by "hosted-engine --deploy" did not list all of the options, so I overlooked 
this one.
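
For reference, a minimal sketch of what that change might look like — I'm assuming the option belongs under the [vars] section of /etc/vdsm/vdsm.conf, so check the defaults shipped with your vdsm version before copying:

# /etc/vdsm/vdsm.conf (sketch)
[vars]
migration_ovs_hook_enabled = True

# then restart vdsm on the host for the change to take effect:
systemctl restart vdsmd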

Thanks for all the help,
Devin

On Mar 27, 2017, at 11:10 AM, Devin A. Bougie  wrote:
> Hi, All.  We have a new oVirt 4.1.1 cluster up with the OVS switch type.  
> Everything seems to be working great, except for live migration.
> 
> I believe the red flag in vdsm.log on the source is:
> Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
> 
> Which results from vdsm assigning an arbitrary bridge name to each ovs bridge.
> 
> Please see below for more details on the bridges and excerpts from the logs.  
> Any help would be greatly appreciated.
> 
> Many thanks,
> Devin
> 
> SOURCE OVS BRIDGES:
> # ovs-vsctl show
> 6d96d9a5-e30d-455b-90c7-9e9632574695
>Bridge "vdsmbr_QwORbsw2"
>Port "vdsmbr_QwORbsw2"
>Interface "vdsmbr_QwORbsw2"
>type: internal
>Port "vnet0"
>Interface "vnet0"
>Port classepublic
>Interface classepublic
>type: internal
>Port "ens1f0"
>Interface "ens1f0"
>Bridge "vdsmbr_9P7ZYKWn"
>Port ovirtmgmt
>Interface ovirtmgmt
>type: internal
>Port "ens1f1"
>Interface "ens1f1"
>Port "vdsmbr_9P7ZYKWn"
>Interface "vdsmbr_9P7ZYKWn"
>type: internal
>ovs_version: "2.7.0"
> 
> DESTINATION OVS BRIDGES:
> # ovs-vsctl show
> f66d765d-712a-4c81-b18e-da1acc9cfdde
>Bridge "vdsmbr_vdpp0dOd"
>Port "vdsmbr_vdpp0dOd"
>Interface "vdsmbr_vdpp0dOd"
>type: internal
>Port "ens1f0"
>Interface "ens1f0"
>Port classepublic
>Interface classepublic
>type: internal
>Bridge "vdsmbr_3sEwEKd1"
>Port "vdsmbr_3sEwEKd1"
>Interface "vdsmbr_3sEwEKd1"
>type: internal
>Port "ens1f1"
>Interface "ens1f1"
>Port ovirtmgmt
>Interface ovirtmgmt
>type: internal
>ovs_version: "2.7.0"
> 
> 
> SOURCE VDSM LOG:
> ...
> 2017-03-27 10:57:02,567-0400 INFO  (jsonrpc/1) [vdsm.api] START migrate 
> args=(, {u'incomingLimit': 2, u'src': 
> u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', 
> u'tunneled': u'false', u'enableGuestEvents': False, u'dst': 
> u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': 
> u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
> u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, 
> u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37)
> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH migrate 
> return={'status': {'message': 'Migration in progress', 'code': 0}, 
> 'progress': 0} (api:43)
> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC 
> call VM.migrate succeeded in 0.01 seconds (__init__:515)
> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM 
> took: 0 seconds (migration:455)
> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
> qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri 
> tcp://192.168.55.81 (migration:480)
> 2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 
> 'vdsmbr_QwORbsw2': No such device (migration:287)
> 2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate 
> (migration:429)
> Traceback (most recent call last):
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in 
> run
>self._startUnderlyingMigration(time.time())
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in 
> _startUnderlyingMigration
>self._perform_with_downtime_thread(duri, muri)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in 
> _perform_with_downtime_thread
>self._perform_migration(duri, muri)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in 
> _perform_migration
>self._vm._dom.migrateToURI3(duri, params, flags)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
>ret = attr(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, 
> in wrapper
>ret = f(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
>return func(inst, *args, **kwargs)
>  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in 
> migrateToURI3
>if ret == -1: raise 

[ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-27 Thread Devin A. Bougie
Hi, All.  We have a new oVirt 4.1.1 cluster up with the OVS switch type.  
Everything seems to be working great, except for live migration.

I believe the red flag in vdsm.log on the source is:
Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)

This results from vdsm assigning an arbitrary bridge name to each OVS bridge.

Please see below for more details on the bridges and excerpts from the logs.  
Any help would be greatly appreciated.

Many thanks,
Devin
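
For context, a quick way to confirm the mismatch the error describes — the randomly named bridge from the source simply does not exist on the destination, as the listings below also show (the commands are just a sketch):

# On the destination host:
ovs-vsctl list-br
ip link show vdsmbr_QwORbsw2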

SOURCE OVS BRIDGES:
# ovs-vsctl show
6d96d9a5-e30d-455b-90c7-9e9632574695
Bridge "vdsmbr_QwORbsw2"
Port "vdsmbr_QwORbsw2"
Interface "vdsmbr_QwORbsw2"
type: internal
Port "vnet0"
Interface "vnet0"
Port classepublic
Interface classepublic
type: internal
Port "ens1f0"
Interface "ens1f0"
Bridge "vdsmbr_9P7ZYKWn"
Port ovirtmgmt
Interface ovirtmgmt
type: internal
Port "ens1f1"
Interface "ens1f1"
Port "vdsmbr_9P7ZYKWn"
Interface "vdsmbr_9P7ZYKWn"
type: internal
ovs_version: "2.7.0"

DESTINATION OVS BRIDGES:
# ovs-vsctl show
f66d765d-712a-4c81-b18e-da1acc9cfdde
Bridge "vdsmbr_vdpp0dOd"
Port "vdsmbr_vdpp0dOd"
Interface "vdsmbr_vdpp0dOd"
type: internal
Port "ens1f0"
Interface "ens1f0"
Port classepublic
Interface classepublic
type: internal
Bridge "vdsmbr_3sEwEKd1"
Port "vdsmbr_3sEwEKd1"
Interface "vdsmbr_3sEwEKd1"
type: internal
Port "ens1f1"
Interface "ens1f1"
Port ovirtmgmt
Interface ovirtmgmt
type: internal
ovs_version: "2.7.0"


SOURCE VDSM LOG:
...
2017-03-27 10:57:02,567-0400 INFO  (jsonrpc/1) [vdsm.api] START migrate 
args=(, {u'incomingLimit': 2, u'src': 
u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', 
u'tunneled': u'false', u'enableGuestEvents': False, u'dst': 
u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': 
u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, u'method': 
u'online', 'mode': 'remote'}) kwargs={} (api:37)
2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH migrate 
return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 
0} (api:43)
2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
VM.migrate succeeded in 0.01 seconds (__init__:515)
2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM took: 
0 seconds (migration:455)
2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri 
tcp://192.168.55.81 (migration:480)
2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 
'vdsmbr_QwORbsw2': No such device (migration:287)
2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in 
run
self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in 
_startUnderlyingMigration
self._perform_with_downtime_thread(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in 
_perform_with_downtime_thread
self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in 
_perform_migration
self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in 
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)
libvirtError: Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device
2017-03-27 10:57:03,435-0400 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:33716 
(protocoldetector:72)
2017-03-27 10:57:03,452-0400 INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from ::1:33716 (protocoldetector:127)
2017-03-27 10:57:03,452-0400 INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT 

Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Vinzenz Feenstra

> On Jun 17, 2016, at 12:47 PM, Vinzenz Feenstra  wrote:
> 
>> 
>> On Jun 17, 2016, at 12:42 PM, Michal Skrivanek > > wrote:
>> 
>> 
>>> On 17 Jun 2016, at 12:37, Fabrice Bacchella >> > wrote:
>>> 
>>> 
 On 17 June 2016 at 12:33, Vinzenz Feenstra > wrote:
 
 
> On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella 
> > 
> wrote:
> 
> 
>> On 17 June 2016 at 12:05, Vinzenz Feenstra > > wrote:
>> 
>> Hi Fabrice,
>> 
>>> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella 
>>> > 
>>> wrote:
>>> 
>>> I'm running an up to date ovirt setup.
>>> 
>>> I tried to put an host in maintenance mode, with one VM running on it.
>>> 
>>> It failed with this message in vdsm.log:
>>> 
> 
>>> libvirtError: internal error: process exited while connecting to 
>>> monitor: 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>>>  Failed to bind socket to 
>>> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>>>  Permission denied
>> 
>> This is pretty odd, could you please send me the out put of this:
>> 
>> # rpm -qa | grep vdsm
>> 
>> From the target and destination hosts. Thanks.
> 
>>> 
 
 Thanks.
 
 And on the destination server what are the access rights on 
 /var/lib/libvirt/qemu/channels? 
>>> On both:
>>> drwxrwxr-x 2 vdsm qemu 137 Jun 14 15:35 /var/lib/libvirt/qemu/channels
>>> drwxrwxr-x 2 vdsm qemu 6 May 24 16:03 /var/lib/libvirt/qemu/channels
>>> 
 And if you have SELinux enabled can you temporary set it to permissive on 
 the destination and try to migrate?
>>> 
>>> SELinux is disabled on both.
>> 
>> And was the VM started in the same SELinux state or did you change it 
>> afterwards while it was running?
> 
> It is disabled since installation (We moved the conversation for now to the 
> IRC) 
> 
> If we found a solution / reason I will respond to the thread to have it 
> documented.

So the reason for the errors was the wrongly set ownership of the 
/var/lib/libvirt/qemu folder: rwxr-x--x 8 oneadmin oneadmin 
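
A sketch of the corresponding check and fix — the expected ownership can differ between setups, so compare against a healthy oVirt host first (the chown target below is an assumption based on the vdsm:qemu ownership shown for the channels directory earlier in this thread):

# Inspect the ownership along the path on both hosts:
ls -ld /var/lib/libvirt/qemu /var/lib/libvirt/qemu/channels
# Example fix, only if a healthy host shows vdsm:qemu here:
chown vdsm:qemu /var/lib/libvirt/qemu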


> 
>> 
>>> 
>>> 


Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Vinzenz Feenstra

> On Jun 17, 2016, at 12:42 PM, Michal Skrivanek  
> wrote:
> 
> 
>> On 17 Jun 2016, at 12:37, Fabrice Bacchella > > wrote:
>> 
>> 
>>> On 17 June 2016 at 12:33, Vinzenz Feenstra >> > wrote:
>>> 
>>> 
 On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella 
 > wrote:
 
 
> On 17 June 2016 at 12:05, Vinzenz Feenstra  > wrote:
> 
> Hi Fabrice,
> 
>> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella 
>> > 
>> wrote:
>> 
>> I'm running an up to date ovirt setup.
>> 
>> I tried to put an host in maintenance mode, with one VM running on it.
>> 
>> It failed with this message in vdsm.log:
>> 
 
>> libvirtError: internal error: process exited while connecting to 
>> monitor: 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>>  Failed to bind socket to 
>> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>>  Permission denied
> 
> This is pretty odd, could you please send me the out put of this:
> 
> # rpm -qa | grep vdsm
> 
> From the target and destination hosts. Thanks.
 
>> 
>>> 
>>> Thanks.
>>> 
>>> And on the destination server what are the access rights on 
>>> /var/lib/libvirt/qemu/channels? 
>> On both:
>> drwxrwxr-x 2 vdsm qemu 137 Jun 14 15:35 /var/lib/libvirt/qemu/channels
>> drwxrwxr-x 2 vdsm qemu 6 May 24 16:03 /var/lib/libvirt/qemu/channels
>> 
>>> And if you have SELinux enabled can you temporary set it to permissive on 
>>> the destination and try to migrate?
>> 
>> SELinux is disabled on both.
> 
> And was the VM started in the same SELinux state or did you change it 
> afterwards while it was running?

It is disabled since installation (We moved the conversation for now to the 
IRC) 

If we find a solution / reason I will respond to the thread to have it 
documented.

> 
>> 
>> 


Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Michal Skrivanek

> On 17 Jun 2016, at 12:37, Fabrice Bacchella  
> wrote:
> 
> 
>> On 17 June 2016 at 12:33, Vinzenz Feenstra > > wrote:
>> 
>> 
>>> On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella 
>>> > wrote:
>>> 
>>> 
 On 17 June 2016 at 12:05, Vinzenz Feenstra > wrote:
 
 Hi Fabrice,
 
> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella 
> > 
> wrote:
> 
> I'm running an up to date ovirt setup.
> 
> I tried to put an host in maintenance mode, with one VM running on it.
> 
> It failed with this message in vdsm.log:
> 
>>> 
> libvirtError: internal error: process exited while connecting to monitor: 
> 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>  Failed to bind socket to 
> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>  Permission denied
 
 This is pretty odd, could you please send me the out put of this:
 
 # rpm -qa | grep vdsm
 
 From the target and destination hosts. Thanks.
>>> 
> 
>> 
>> Thanks.
>> 
>> And on the destination server what are the access rights on 
>> /var/lib/libvirt/qemu/channels? 
> On both:
> drwxrwxr-x 2 vdsm qemu 137 Jun 14 15:35 /var/lib/libvirt/qemu/channels
> drwxrwxr-x 2 vdsm qemu 6 May 24 16:03 /var/lib/libvirt/qemu/channels
> 
>> And if you have SELinux enabled can you temporary set it to permissive on 
>> the destination and try to migrate?
> 
> SELinux is disabled on both.

And was the VM started in the same SELinux state or did you change it 
afterwards while it was running?

> 
> 


Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Fabrice Bacchella

> On 17 June 2016 at 12:33, Vinzenz Feenstra  wrote:
> 
> 
>> On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella 
>> > wrote:
>> 
>> 
>>> On 17 June 2016 at 12:05, Vinzenz Feenstra >> > wrote:
>>> 
>>> Hi Fabrice,
>>> 
 On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella 
 > wrote:
 
 I'm running an up to date ovirt setup.
 
 I tried to put an host in maintenance mode, with one VM running on it.
 
 It failed with this message in vdsm.log:
 
>> 
 libvirtError: internal error: process exited while connecting to monitor: 
 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
 socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
  Failed to bind socket to 
 /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
  Permission denied
>>> 
>>> This is pretty odd, could you please send me the out put of this:
>>> 
>>> # rpm -qa | grep vdsm
>>> 
>>> From the target and destination hosts. Thanks.
>> 

> 
> Thanks.
> 
> And on the destination server what are the access rights on 
> /var/lib/libvirt/qemu/channels? 
On both:
drwxrwxr-x 2 vdsm qemu 137 Jun 14 15:35 /var/lib/libvirt/qemu/channels
drwxrwxr-x 2 vdsm qemu 6 May 24 16:03 /var/lib/libvirt/qemu/channels

> And if you have SELinux enabled can you temporary set it to permissive on the 
> destination and try to migrate?

SELinux is disabled on both.




Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Vinzenz Feenstra

> On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella 
>  wrote:
> 
> 
>> On 17 June 2016 at 12:05, Vinzenz Feenstra > > wrote:
>> 
>> Hi Fabrice,
>> 
>>> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella 
>>> > wrote:
>>> 
>>> I'm running an up to date ovirt setup.
>>> 
>>> I tried to put an host in maintenance mode, with one VM running on it.
>>> 
>>> It failed with this message in vdsm.log:
>>> 
> 
>>> libvirtError: internal error: process exited while connecting to monitor: 
>>> 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>>>  Failed to bind socket to 
>>> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>>>  Permission denied
>> 
>> This is pretty odd, could you please send me the out put of this:
>> 
>> # rpm -qa | grep vdsm
>> 
>> From the target and destination hosts. Thanks.
> 
> On the host I was trying to put on maintenance:
> vdsm-xmlrpc-4.17.28-0.el7.centos.noarch
> vdsm-4.17.28-0.el7.centos.noarch
> vdsm-infra-4.17.28-0.el7.centos.noarch
> vdsm-yajsonrpc-4.17.28-0.el7.centos.noarch
> vdsm-python-4.17.28-0.el7.centos.noarch
> vdsm-jsonrpc-4.17.28-0.el7.centos.noarch
> vdsm-hook-vmfex-dev-4.17.28-0.el7.centos.noarch
> vdsm-cli-4.17.28-0.el7.centos.noarch
> 
> And it was trying to send to an host with:
> vdsm-yajsonrpc-4.17.28-1.el7.noarch
> vdsm-cli-4.17.28-1.el7.noarch
> vdsm-python-4.17.28-1.el7.noarch
> vdsm-hook-vmfex-dev-4.17.28-1.el7.noarch
> vdsm-xmlrpc-4.17.28-1.el7.noarch
> vdsm-4.17.28-1.el7.noarch
> vdsm-infra-4.17.28-1.el7.noarch
> vdsm-jsonrpc-4.17.28-1.el7.noarch
> 
> And in the log about that:
> jsonrpc.Executor/1::DEBUG::2016-06-17 
> 11:39:57,233::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 
> 'VM.migrate' in bridge with {u'params': {u
> 'tunneled': u'false', u'dstqemu': u'XX.XX.XX.28', u'autoConverge': u'false', 
> u'src': u'nb0101.XXX', u'dst': u'nb0105.XXX:54321', u
> 'vmId': u'b82209c9-42ff-457c-bb98-b6a2034833fc', u'abortOnError': u'true', 
> u'compressed': u'false', u'method': u'online'}, u'vmID': 
> u'b82209c9-42ff-457c-bb9
> 8-b6a2034833fc'}
> jsonrpc.Executor/1::DEBUG::2016-06-17 11:39:57,234::API::547::vds::(migrate) 
> {u'tunneled': u'false', u'dstqemu': u'XX.XX.XX.28', u'autoConverge': 
> u'false', 
> u'src': u'nb0101.XXX', u'dst': u'nb0105.XXX:54321', u'vmId': 
> u'b82209c9-42ff-457c-bb98-b6a2034833fc', u'abortOnError': u'true', u'
> compressed': u'false', u'method': u'online’}

Thanks.

And on the destination server what are the access rights on 
/var/lib/libvirt/qemu/channels? 
And if you have SELinux enabled, can you temporarily set it to permissive on the 
destination and try to migrate?
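
For anyone following along, the two checks above boil down to something like this on the destination host (switching SELinux to permissive is only meant as a temporary test):

ls -ld /var/lib/libvirt/qemu/channels
getenforce
setenforce 0    # temporary, for the test migration only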


> 



Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Fabrice Bacchella

> On 17 June 2016 at 12:05, Vinzenz Feenstra  wrote:
> 
> Hi Fabrice,
> 
>> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella 
>> > wrote:
>> 
>> I'm running an up to date ovirt setup.
>> 
>> I tried to put an host in maintenance mode, with one VM running on it.
>> 
>> It failed with this message in vdsm.log:
>> 

>> libvirtError: internal error: process exited while connecting to monitor: 
>> 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>>  Failed to bind socket to 
>> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>>  Permission denied
> 
> This is pretty odd, could you please send me the out put of this:
> 
> # rpm -qa | grep vdsm
> 
> From the target and destination hosts. Thanks.

On the host I was trying to put on maintenance:
vdsm-xmlrpc-4.17.28-0.el7.centos.noarch
vdsm-4.17.28-0.el7.centos.noarch
vdsm-infra-4.17.28-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.28-0.el7.centos.noarch
vdsm-python-4.17.28-0.el7.centos.noarch
vdsm-jsonrpc-4.17.28-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.28-0.el7.centos.noarch
vdsm-cli-4.17.28-0.el7.centos.noarch

And it was trying to send to an host with:
vdsm-yajsonrpc-4.17.28-1.el7.noarch
vdsm-cli-4.17.28-1.el7.noarch
vdsm-python-4.17.28-1.el7.noarch
vdsm-hook-vmfex-dev-4.17.28-1.el7.noarch
vdsm-xmlrpc-4.17.28-1.el7.noarch
vdsm-4.17.28-1.el7.noarch
vdsm-infra-4.17.28-1.el7.noarch
vdsm-jsonrpc-4.17.28-1.el7.noarch

And in the log about that:
jsonrpc.Executor/1::DEBUG::2016-06-17 
11:39:57,233::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 
'VM.migrate' in bridge with {u'params': {u
'tunneled': u'false', u'dstqemu': u'XX.XX.XX.28', u'autoConverge': u'false', 
u'src': u'nb0101.XXX', u'dst': u'nb0105.XXX:54321', u
'vmId': u'b82209c9-42ff-457c-bb98-b6a2034833fc', u'abortOnError': u'true', 
u'compressed': u'false', u'method': u'online'}, u'vmID': 
u'b82209c9-42ff-457c-bb9
8-b6a2034833fc'}
jsonrpc.Executor/1::DEBUG::2016-06-17 11:39:57,234::API::547::vds::(migrate) 
{u'tunneled': u'false', u'dstqemu': u'XX.XX.XX.28', u'autoConverge': u'false', 
u'src': u'nb0101.XXX', u'dst': u'nb0105.XXX:54321', u'vmId': 
u'b82209c9-42ff-457c-bb98-b6a2034833fc', u'abortOnError': u'true', u'
compressed': u'false', u'method': u'online'}



Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Vinzenz Feenstra
Hi Fabrice,

> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella 
>  wrote:
> 
> I'm running an up to date ovirt setup.
> 
> I tried to put an host in maintenance mode, with one VM running on it.
> 
> It failed with this message in vdsm.log:
> 
> Thread-351083::ERROR::2016-06-17 
> 11:30:04,732::migration::209::virt.vm::(_recover) 
> vmId=`b82209c9-42ff-457c-bb98-b6a2034833fc`::internal error: process exited 
> while connecting to monitor: 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>  Failed to bind socket to 
> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>  Permission denied
> ...
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/migration.py", line 298, in run
> self._startUnderlyingMigration(time.time())
>   File "/usr/share/vdsm/virt/migration.py", line 364, in 
> _startUnderlyingMigration
> self._perform_migration(duri, muri)
>   File "/usr/share/vdsm/virt/migration.py", line 403, in _perform_migration
> self._vm._dom.migrateToURI3(duri, params, flags)
>   File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
> 124, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in 
> migrateToURI3
> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
> dom=self)
> libvirtError: internal error: process exited while connecting to monitor: 
> 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>  Failed to bind socket to 
> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>  Permission denied

This is pretty odd, could you please send me the out put of this:

# rpm -qa | grep vdsm

From the target and destination hosts. Thanks.

> 
> If i check the file, I see :
> 
> srwxrwxr-x 1 qemu qemu 0 May 31 16:21 
> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm
> 
> And on all my hosts, the permissions are the same:
> srwxrwxr-x 1 qemu qemu /var/lib/libvirt/qemu/channels/*
> 
> And vdsm is running vdsm:
> 4 S vdsm  3816 1  0  60 -20 - 947345 poll_s May25 ?   02:21:58 
> /usr/bin/python /usr/share/vdsm/vdsm
> 
> If I check vdsm groups:
> ~# id vdsm
> uid=36(vdsm) gid=36(kvm) groups=36(kvm),179(sanlock),107(qemu)
> 
> 
> 
> 


[ovirt-users] migration failed with permission denied

2016-06-17 Thread Fabrice Bacchella
I'm running an up to date ovirt setup.

I tried to put an host in maintenance mode, with one VM running on it.

It failed with this message in vdsm.log:

Thread-351083::ERROR::2016-06-17 
11:30:04,732::migration::209::virt.vm::(_recover) 
vmId=`b82209c9-42ff-457c-bb98-b6a2034833fc`::internal error: process exited 
while connecting to monitor: 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
 Failed to bind socket to 
/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
 Permission denied
...
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 298, in run
self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 364, in 
_startUnderlyingMigration
self._perform_migration(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 403, in _perform_migration
self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in 
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)
libvirtError: internal error: process exited while connecting to monitor: 
2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
 Failed to bind socket to 
/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
 Permission denied

If i check the file, I see :

srwxrwxr-x 1 qemu qemu 0 May 31 16:21 
/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm

And on all my hosts, the permissions are the same:
srwxrwxr-x 1 qemu qemu /var/lib/libvirt/qemu/channels/*

And vdsm is running vdsm:
4 S vdsm  3816 1  0  60 -20 - 947345 poll_s May25 ?   02:21:58 
/usr/bin/python /usr/share/vdsm/vdsm

If I check vdsm groups:
~# id vdsm
uid=36(vdsm) gid=36(kvm) groups=36(kvm),179(sanlock),107(qemu)
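
Since the socket itself looks fine, the permissions of the parent directories are worth checking as well; a small sketch (namei -l prints owner and mode for every component of the path):

ls -ld /var/lib/libvirt/qemu
namei -l /var/lib/libvirt/qemu/channels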






Re: [ovirt-users] migration failed no available host found ....

2016-02-15 Thread Michal Skrivanek

> On 15 Feb 2016, at 12:00, Jean-Pierre Ribeauville <jpribeauvi...@axway.com> 
> wrote:
> 
> Hi,
>  
> You hit the target !!!
>  
> I enable overcommitting on the destination , then I’m able to migrate towards 
> it.
>  
> Now I’ve to clarify my  Guests memory requirements.
>  
> Thx for your help.
>  
> Regards,
>  
> J.P.
>  
> _
> From: Jean-Pierre Ribeauville 
> Sent: Monday, 15 February 2016 11:03
> To: 'ILanit Stein'
> Cc: users@ovirt.org <mailto:users@ovirt.org>
> Subject: RE: [ovirt-users] migration failed no available host found 
>  
>  
> Hi,
>  
> Within the ovirt GUI , I got this :
>  
> Max free Memory for scheduling new VMs : 0 Mb
>  
> It seems to be the root cause of my issue .
>  
> Vmstat -s ran on the destination host returns :
>  
> [root@ldc01omv01 vdsm]# vmstat -s
>  49182684 K total memory
>   4921536 K used memory
>   5999188 K active memory
>   1131436 K inactive memory
>  39891992 K free memory
>  2344 K buffer memory
>   4366812 K swap cache
>  24707068 K total swap
> 0 K used swap
>  24707068 K free swap
>   3090822 non-nice user cpu ticks
>  8068 nice user cpu ticks
>   2637035 system cpu ticks
> 804915819 idle cpu ticks
>298074 IO-wait cpu ticks
> 6 IRQ cpu ticks
>  5229 softirq cpu ticks
> 0 stolen cpu ticks
>  58678411 pages paged in
>  78586581 pages paged out
> 0 pages swapped in
> 0 pages swapped out
> 541412845 interrupts
>1224374736 CPU context switches
>1455276687 boot time
>476762 forks
> [root@ldc01omv01 vdsm]#
>  
>  
> Is it vdsm that returns this info to ovirt ?
>  
> I tried a migration this morning at 10/04.
>  
> I attached  vdsm destination log .
>  
> << File: vdsm.log >> 
>  
> Is it worth to increase of level of destination log ?
>  
>  
> Thx for help.
>  
> Regards,
>  
> J.P.
>  
> -Original Message-
> From: ILanit Stein [mailto:ist...@redhat.com <mailto:ist...@redhat.com>] 
> Sent: Sunday, 14 February 2016 10:05
> To: Jean-Pierre Ribeauville
> Cc: users@ovirt.org <mailto:users@ovirt.org>
> Subject: Re: [ovirt-users] migration failed no available host found 
>  
> Hi Jean-Pierre,
>  
> It seems from the log you've sent that the destination host, ldc01omv01, is 
> filtered out because of a lack of memory.
> Is there enough memory on the destination, to run this VM?
>  
> Would you please send the source/destination hosts /var/log/vdsm/vdsm.log, 
> /var/log/libvirt/qemu/VM_RHEL7-2, and /var/log/vdsm/libvirt.log, to provide 
> more details.
>  
> Thanks,
> Ilanit.
>  
> - Original Message -
> From: "Jean-Pierre Ribeauville" <jpribeauvi...@axway.com 
> <mailto:jpribeauvi...@axway.com>>
> To: users@ovirt.org <mailto:users@ovirt.org>
> Sent: Friday, February 12, 2016 4:59:20 PM
> Subject: [ovirt-users] migration failed no available host found 
>  
>  
>  
> Hi, 
>  
>  
>  
> When trying to migrate a Guest between two nodes of a cluster (from node1 to 
> ldc01omv01) , I got this error ( in ovirt/engine.log file) : 
>  
>  
>  
> 2016-02-12 15:05:31,485 INFO 
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
> (ajp-/127.0.0.1:8702-4) [ff31b86] Candidate host ldc01omv01 
> (09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
> VAR__FILTERTYPE__INTERNAL filter Memory 
>  
> 2016-02-12 15:05:31,495 DEBUG 
> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
> (org.ovirt.thread.pool-7-thread-34) About to run task 
> java.util.concurrent.FutureTask from : java.lang.Exception 
>  
> at 
> org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.beforeExecute(ThreadPoolUtil.java:52)
>  [utils.jar:] 
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [rt.jar:1.7.0_85] 
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [rt.jar:1.7.0_85] 
>  
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_85] 
>  
>  
>  
> 2016-02-12 15:05:31,502 INFO 
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
> (org.ovirt.thread.pool-7-thread-34) [ff31b86] Running command: 
> MigrateVmToServerCommand internal: false. Entities affected : ID: 
> b77e6171-cbdf-44cb-b851-1b776a3fb616 Type: VMAction group MIGRATE_VM with 
> role type USER 
>  
> 2016-02-12 15:05:31,505 INFO 
> [org.ovirt.engine.cor

Re: [ovirt-users] migration failed no available host found ....

2016-02-15 Thread Jean-Pierre Ribeauville
Hi,

You hit the target !!!

I enabled overcommitting on the destination, and now I’m able to migrate towards 
it.

Now I have to clarify my guests’ memory requirements.

Thx for your help.

Regards,

J.P.

_
From: Jean-Pierre Ribeauville
Sent: Monday, 15 February 2016 11:03
To: 'ILanit Stein'
Cc: users@ovirt.org
Subject: RE: [ovirt-users] migration failed no available host found 


Hi,

Within the oVirt GUI, I got this:

Max free Memory for scheduling new VMs: 0 MB

It seems to be the root cause of my issue.

vmstat -s run on the destination host returns:

[root@ldc01omv01 vdsm]# vmstat -s
 49182684 K total memory
  4921536 K used memory
  5999188 K active memory
  1131436 K inactive memory
 39891992 K free memory
 2344 K buffer memory
  4366812 K swap cache
 24707068 K total swap
0 K used swap
 24707068 K free swap
  3090822 non-nice user cpu ticks
 8068 nice user cpu ticks
  2637035 system cpu ticks
804915819 idle cpu ticks
   298074 IO-wait cpu ticks
6 IRQ cpu ticks
 5229 softirq cpu ticks
0 stolen cpu ticks
 58678411 pages paged in
 78586581 pages paged out
0 pages swapped in
0 pages swapped out
541412845 interrupts
   1224374736 CPU context switches
   1455276687 boot time
   476762 forks
[root@ldc01omv01 vdsm]#
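
Note that "Max free Memory for scheduling new VMs" is not the OS-level free memory that vmstat reports; the engine derives it from the memory already committed to running VMs and the cluster's memory overcommit setting. Roughly (a sketch of the idea, not the exact engine formula):

max free memory for scheduling ≈ physical memory * overcommit%
                                 - memory guaranteed to running VMs
                                 - host reserved memory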


Is it vdsm that returns this info to oVirt?

I tried a migration this morning at 10/04.

I attached the vdsm destination log.

 << File: vdsm.log >>

Is it worth increasing the log level on the destination?


Thx for help.

Regards,

J.P.

-Original Message-
From: ILanit Stein [mailto:ist...@redhat.com]
Sent: Sunday, 14 February 2016 10:05
To: Jean-Pierre Ribeauville
Cc: users@ovirt.org
Subject: Re: [ovirt-users] migration failed no available host found 

Hi Jean-Pierre,

It seems from the log you've sent that the destination host, ldc01omv01, is filtered 
out because of a lack of memory.
Is there enough memory on the destination, to run this VM?

Would you please send the source/destination hosts /var/log/vdsm/vdsm.log, 
/var/log/libvirt/qemu/VM_RHEL7-2, and /var/log/vdsm/libvirt.log, to provide 
more details.

Thanks,
Ilanit.

- Original Message -
From: "Jean-Pierre Ribeauville" 
<jpribeauvi...@axway.com<mailto:jpribeauvi...@axway.com>>
To: users@ovirt.org<mailto:users@ovirt.org>
Sent: Friday, February 12, 2016 4:59:20 PM
Subject: [ovirt-users] migration failed no available host found 



Hi,



When trying to migrate a Guest between two nodes of a cluster (from node1 to 
ldc01omv01) , I got this error ( in ovirt/engine.log file) :



2016-02-12 15:05:31,485 INFO 
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(ajp-/127.0.0.1:8702-4) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory

2016-02-12 15:05:31,495 DEBUG 
[org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
(org.ovirt.thread.pool-7-thread-34) About to run task 
java.util.concurrent.FutureTask from : java.lang.Exception

at 
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.beforeExecute(ThreadPoolUtil.java:52)
 [utils.jar:]

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[rt.jar:1.7.0_85]

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[rt.jar:1.7.0_85]

at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_85]



2016-02-12 15:05:31,502 INFO 
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Running command: 
MigrateVmToServerCommand internal: false. Entities affected : ID: 
b77e6171-cbdf-44cb-b851-1b776a3fb616 Type: VMAction group MIGRATE_VM with role 
type USER

2016-02-12 15:05:31,505 INFO 
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory (correlation id: ff31b86)

2016-02-12 15:05:31,509 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Correlation ID: ff31b86, Job ID: 
7b917604-f487-43a3-9cd2-4e7f95e545cc, Call Stack: null, Custom Event ID: -1, 
Message: Migration failed, No available host found (VM: VM_RHEL7-2, Source: 
node1).





Nothing strange in the oVirt GUI.



How may I go further to investigate this issue?





Thx for help.



Regards,






J.P. Ribeauville




P: +33.(0).1.47.17.20.49

.

Puteaux 3 Etage 5 Bureau 4



jpribeauvi...@axway.com
http://www.axway.com






Please consider the environment before printing.





___
Users mailing list
Users@ovirt.org

Re: [ovirt-users] migration failed no available host found ....

2016-02-14 Thread ILanit Stein
Hi Jean-Pierre,

It seems from the log you've sent that the destination host, ldc01omv01, is filtered 
out because of a lack of memory.
Is there enough memory on the destination to run this VM?

Would you please send the source/destination hosts /var/log/vdsm/vdsm.log, 
/var/log/libvirt/qemu/VM_RHEL7-2, and /var/log/vdsm/libvirt.log,
to provide more details.

Thanks,
Ilanit.

- Original Message -
From: "Jean-Pierre Ribeauville" <jpribeauvi...@axway.com>
To: users@ovirt.org
Sent: Friday, February 12, 2016 4:59:20 PM
Subject: [ovirt-users] migration failed no available host found 



Hi, 



When trying to migrate a guest between two nodes of a cluster (from node1 to 
ldc01omv01), I got this error (in the ovirt/engine.log file): 



2016-02-12 15:05:31,485 INFO 
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(ajp-/127.0.0.1:8702-4) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory 

2016-02-12 15:05:31,495 DEBUG 
[org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
(org.ovirt.thread.pool-7-thread-34) About to run task 
java.util.concurrent.FutureTask from : java.lang.Exception 

at 
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.beforeExecute(ThreadPoolUtil.java:52)
 [utils.jar:] 

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[rt.jar:1.7.0_85] 

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[rt.jar:1.7.0_85] 

at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_85] 



2016-02-12 15:05:31,502 INFO 
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Running command: 
MigrateVmToServerCommand internal: false. Entities affected : ID: 
b77e6171-cbdf-44cb-b851-1b776a3fb616 Type: VMAction group MIGRATE_VM with role 
type USER 

2016-02-12 15:05:31,505 INFO 
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory (correlation id: ff31b86) 

2016-02-12 15:05:31,509 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Correlation ID: ff31b86, Job ID: 
7b917604-f487-43a3-9cd2-4e7f95e545cc, Call Stack: null, Custom Event ID: -1, 
Message: Migration failed, No available host found (VM: VM_RHEL7-2, Source: 
node1). 





Nothing strange in the oVirt GUI. 



How may I go further to investigate this issue? 





Thx for help. 



Regards, 






J.P. Ribeauville 




P: +33.(0).1.47.17.20.49 

. 

Puteaux 3 Etage 5 Bureau 4 



jpribeauvi...@axway.com 
http://www.axway.com 






Please consider the environment before printing. 





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] migration failed no available host found ....

2016-02-12 Thread Jean-Pierre Ribeauville
Hi,

When trying to migrate a guest between two nodes of a cluster (from node1 to 
ldc01omv01), I got this error (in the ovirt/engine.log file):

2016-02-12 15:05:31,485 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(ajp-/127.0.0.1:8702-4) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory
2016-02-12 15:05:31,495 DEBUG 
[org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
(org.ovirt.thread.pool-7-thread-34) About to run task 
java.util.concurrent.FutureTask from : java.lang.Exception
at 
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.beforeExecute(ThreadPoolUtil.java:52)
 [utils.jar:]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[rt.jar:1.7.0_85]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[rt.jar:1.7.0_85]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_85]

2016-02-12 15:05:31,502 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Running command: 
MigrateVmToServerCommand internal: false. Entities affected :  ID: 
b77e6171-cbdf-44cb-b851-1b776a3fb616 Type: VMAction group MIGRATE_VM with role 
type USER
2016-02-12 15:05:31,505 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory (correlation id: ff31b86)
2016-02-12 15:05:31,509 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Correlation ID: ff31b86, Job ID: 
7b917604-f487-43a3-9cd2-4e7f95e545cc, Call Stack: null, Custom Event ID: -1, 
Message: Migration failed, No available host found (VM: VM_RHEL7-2, Source: 
node1).


Nothing strange in the oVirt GUI.

How may I go further to investigate this issue?
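
(One way to dig further on the engine side, using only what is already in the excerpt above; the correlation id ff31b86 comes from these very log lines.)

# List the scheduler's reasons for rejecting candidate hosts:
grep 'was filtered out by' /var/log/ovirt-engine/engine.log | tail -n 20
# Follow this particular migration attempt end to end via its correlation id:
grep 'ff31b86' /var/log/ovirt-engine/engine.log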


Thx for help.

Regards,


J.P. Ribeauville


P: +33.(0).1.47.17.20.49
.
Puteaux 3 Etage 5  Bureau 4

jpribeauvi...@axway.com
http://www.axway.com



Please consider the environment before printing.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed

2015-12-10 Thread Yaniv Dary
We need logs to help, please attach them.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Wed, Dec 9, 2015 at 2:07 PM, Massimo Mad  wrote:

> Hi Michal,
> This is my configuration and the error:
>
> 1. Start migrating the VM from the cluster on CentOS 6.x to the cluster on
> bare-metal CentOS 7.x:
> Migration started (VM: Spacewalk, Source: ovirtxx3, Destination: ovirtxx5,
> User: admin@internal).
> 2. First error: Migration failed due to Error: Fatal error during
> migration. Trying to migrate to another Host (VM: Spacewalkp, Source:
> ovirtxx03, Destination: ovirtxx05).
> 3. Second error: Migration failed, No available host found (VM: Spacewalk,
> Source: ovirtxx3).
>
>
> Regards
> Massimo
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migration failed

2015-12-09 Thread Massimo Mad
Hi Michal,
This is my configuration and the error:

1. Start migrating the VM from the cluster on CentOS 6.x to the cluster on
bare-metal CentOS 7.x:
Migration started (VM: Spacewalk, Source: ovirtxx3, Destination: ovirtxx5,
User: admin@internal).
2. First error: Migration failed due to Error: Fatal error during
migration. Trying to migrate to another Host (VM: Spacewalkp, Source:
ovirtxx03, Destination: ovirtxx05).
3. Second error: Migration failed, No available host found (VM: Spacewalk,
Source: ovirtxx3).


Regards
Massimo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed

2015-12-06 Thread Michal Skrivanek


> On 04 Dec 2015, at 18:56, Massimo Mad  wrote:
> 
> Hi,
> I want to upgrade my oVirt infrastructure, moving hosts from CentOS 6.x to 
> bare-metal CentOS 7.x.
> I created a new cluster containing the new host, and when I try to migrate 
> the VM from one cluster to the other I get the following messages:

Cross cluster migration is for el6 to el7 upgrade only, one way. 

> Migration failed, No available hosts found
> Migration failed due to Error: Fatal Error during migration. Trying to 
> migrate to another Host

Please describe your steps, setup, and errors in more detail

Thanks,
michal

> I checked the host file and the certificates and everything is fine
> Regards
> Massimo
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migration failed

2015-12-04 Thread Massimo Mad
Hi,
I want to upgrade my oVirt infrastructure, moving hosts from CentOS 6.x to
bare-metal CentOS 7.x.
I created a new cluster containing the new host, and when I try to migrate
the VM from one cluster to the other I get the following messages:
Migration failed, No available hosts found
Migration failed due to Error: Fatal Error during migration. Trying to
migrate to another Host
I checked the hosts file and the certificates and everything is fine
Regards
Massimo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed, No available host found

2015-04-06 Thread Artyom Lukianov
The engine tries to migrate the VM to an available host; if the migration fails, the 
engine tries another host. For some reason the migration failed on all hosts:
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
MigrateStatusVDS, error = Fatal error during migration, code = 12

For further investigation we need the vdsm logs (/var/log/vdsm/vdsm.log) from the source 
and also from the destination hosts.
Thanks
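
(A small sketch of how the engine and vdsm logs can be lined up, using the vmId quoted above; run the vdsm grep on both the source and the destination host.)

VMID=9de649ca-c9a9-4ba7-bb2c-61c44e2819af
# On the engine:
grep "$VMID" /var/log/ovirt-engine/engine.log > engine-vm.log
# On the source and on the destination host:
grep -i -e "$VMID" -e migration /var/log/vdsm/vdsm.log > vdsm-vm.log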


- Original Message -
From: Jason Keltz j...@cse.yorku.ca
To: users users@ovirt.org
Sent: Monday, April 6, 2015 3:47:23 PM
Subject: [ovirt-users] Migration failed, No available host found

Hi.

I have 3 nodes in one cluster and 1 VM running on node2.  I'm trying to 
move the VM to node 1 or node 3, and it fails with the error: Migration 
failed, No available host found

I'm unable to decipher engine.log to determine the cause of the 
problem.  Below is what seems to be the relevant lines from the log.  
Any help would be appreciated.

Thank you!

Jason.

---

2015-04-06 08:31:56,554 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] (ajp--127.0.0.1-8702-5) 
[3b191496] Lock Acquired to object EngineLock [exclusiveLocks= key: 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af value: VM
, sharedLocks= ]
2015-04-06 08:31:56,686 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Running command: 
MigrateVmCommand internal: false. Entities affected :  ID: 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: VMAction group MIGRATE_VM 
with role type USER,  ID: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: 
VMAction group EDIT_VM_PROPERTIES with role type USER,  ID: 
8d432949-e03c-4950-a91a-160727f7bdf2 Type: VdsGroupsAction group 
CREATE_VM with role type USER
2015-04-06 08:31:56,703 INFO 
[org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit]
 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Started HA reservation 
scoring method
2015-04-06 08:31:56,727 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] START, 
MigrateVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, 
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, 
dstHost=192.168.0.36:54321, migrationMethod=ONLINE, 
tunnelMigration=false, migrationDowntime=0), log id: 7555acbd
2015-04-06 08:31:56,728 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] START, 
MigrateBrokerVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, 
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, 
dstHost=192.168.0.36:54321, migrationMethod=ONLINE, 
tunnelMigration=false, migrationDowntime=0), log id: 6d98fb94
2015-04-06 08:31:56,734 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, 
MigrateBrokerVDSCommand, log id: 6d98fb94
2015-04-06 08:31:56,769 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, 
MigrateVDSCommand, return: MigratingFrom, log id: 7555acbd
2015-04-06 08:31:56,778 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Correlation ID: 3b191496, 
Job ID: 0f8c2d21-201e-454f-9876-dce9a1ca56fd, Call Stack: null, Custom 
Event ID: -1, Message: Migration started (VM: nindigo, Source: virt2, 
Destination: virt3, User: admin@internal).
2015-04-06 08:33:17,633 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] VM nindigo 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af moved from MigratingFrom -- Up
2015-04-06 08:33:17,633 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] Adding VM 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af to re-run list
2015-04-06 08:33:17,661 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] Rerun vm 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af. Called from vds virt2
2015-04-06 08:33:17,666 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] START, 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af), log id: 6c3c9923
2015-04-06 08:33:17,669 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Failed in 
MigrateStatusVDS method
2015-04-06 08:33:17,670 INFO

[ovirt-users] Migration failed, No available host found

2015-04-06 Thread Jason Keltz

Hi.

I have 3 nodes in one cluster and 1 VM running on node2.  I'm trying to 
move the VM to node 1 or node 3, and it fails with the error: Migration 
failed, No available host found


I'm unable to decipher engine.log to determine the cause of the 
problem.  Below is what seems to be the relevant lines from the log.  
Any help would be appreciated.


Thank you!

Jason.

---

2015-04-06 08:31:56,554 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] (ajp--127.0.0.1-8702-5) 
[3b191496] Lock Acquired to object EngineLock [exclusiveLocks= key: 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af value: VM

, sharedLocks= ]
2015-04-06 08:31:56,686 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Running command: 
MigrateVmCommand internal: false. Entities affected :  ID: 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: VMAction group MIGRATE_VM 
with role type USER,  ID: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: 
VMAction group EDIT_VM_PROPERTIES with role type USER,  ID: 
8d432949-e03c-4950-a91a-160727f7bdf2 Type: VdsGroupsAction group 
CREATE_VM with role type USER
2015-04-06 08:31:56,703 INFO 
[org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Started HA reservation 
scoring method
2015-04-06 08:31:56,727 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] START, 
MigrateVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, 
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, 
dstHost=192.168.0.36:54321, migrationMethod=ONLINE, 
tunnelMigration=false, migrationDowntime=0), log id: 7555acbd
2015-04-06 08:31:56,728 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] START, 
MigrateBrokerVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, 
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, 
dstHost=192.168.0.36:54321, migrationMethod=ONLINE, 
tunnelMigration=false, migrationDowntime=0), log id: 6d98fb94
2015-04-06 08:31:56,734 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, 
MigrateBrokerVDSCommand, log id: 6d98fb94
2015-04-06 08:31:56,769 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, 
MigrateVDSCommand, return: MigratingFrom, log id: 7555acbd
2015-04-06 08:31:56,778 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Correlation ID: 3b191496, 
Job ID: 0f8c2d21-201e-454f-9876-dce9a1ca56fd, Call Stack: null, Custom 
Event ID: -1, Message: Migration started (VM: nindigo, Source: virt2, 
Destination: virt3, User: admin@internal).
2015-04-06 08:33:17,633 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] VM nindigo 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af moved from MigratingFrom -- Up
2015-04-06 08:33:17,633 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] Adding VM 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af to re-run list
2015-04-06 08:33:17,661 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] Rerun vm 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af. Called from vds virt2
2015-04-06 08:33:17,666 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] START, 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af), log id: 6c3c9923
2015-04-06 08:33:17,669 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Failed in 
MigrateStatusVDS method
2015-04-06 08:33:17,670 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return 
value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12, 
mMessage=Fatal error during migration]]
2015-04-06 08:33:17,670 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] HostName = virt2
2015-04-06 08:33:17,670 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 

Re: [ovirt-users] Migration failed, No available host found

2015-04-06 Thread Jason Keltz

Hi Artyom,

Here are the vdsm logs from  virt1, virt2 (where the node is running), 
and virt3.

The logs from virt2 look suspicious, but I'm still not sure what the problem is.

http://goo.gl/GjbWUP

Jason.

On 04/06/2015 09:42 AM, Artyom Lukianov wrote:

The engine tries to migrate the VM to an available host; if the migration fails, so 
the engine tries another host. For some reason the migration failed on all hosts:
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command
MigrateStatusVDSCommand(HostName = virt2, HostId =
1d1d1fbb-3067-4703-8b51-e0a231d344e6,
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
MigrateStatusVDS, error = Fatal error during migration, code = 12

For further investigation we need the vdsm logs (/var/log/vdsm/vdsm.log) from the source 
and also from the destination hosts.
Thanks


- Original Message -
From: Jason Keltz j...@cse.yorku.ca
To: users users@ovirt.org
Sent: Monday, April 6, 2015 3:47:23 PM
Subject: [ovirt-users] Migration failed, No available host found

Hi.

I have 3 nodes in one cluster and 1 VM running on node2.  I'm trying to
move the VM to node 1 or node 3, and it fails with the error: Migration
failed, No available host found

I'm unable to decipher engine.log to determine the cause of the
problem.  Below is what seems to be the relevant lines from the log.
Any help would be appreciated.

Thank you!

Jason.

---

2015-04-06 08:31:56,554 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand] (ajp--127.0.0.1-8702-5)
[3b191496] Lock Acquired to object EngineLock [exclusiveLocks= key:
9de649ca-c9a9-4ba7-bb2c-61c44e2819af value: VM
, sharedLocks= ]
2015-04-06 08:31:56,686 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] Running command:
MigrateVmCommand internal: false. Entities affected :  ID:
9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: VMAction group MIGRATE_VM
with role type USER,  ID: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type:
VMAction group EDIT_VM_PROPERTIES with role type USER,  ID:
8d432949-e03c-4950-a91a-160727f7bdf2 Type: VdsGroupsAction group
CREATE_VM with role type USER
2015-04-06 08:31:56,703 INFO
[org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit]
(org.ovirt.thread.pool-8-thread-20) [3b191496] Started HA reservation
scoring method
2015-04-06 08:31:56,727 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] START,
MigrateVDSCommand(HostName = virt2, HostId =
1d1d1fbb-3067-4703-8b51-e0a231d344e6,
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35,
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc,
dstHost=192.168.0.36:54321, migrationMethod=ONLINE,
tunnelMigration=false, migrationDowntime=0), log id: 7555acbd
2015-04-06 08:31:56,728 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] START,
MigrateBrokerVDSCommand(HostName = virt2, HostId =
1d1d1fbb-3067-4703-8b51-e0a231d344e6,
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35,
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc,
dstHost=192.168.0.36:54321, migrationMethod=ONLINE,
tunnelMigration=false, migrationDowntime=0), log id: 6d98fb94
2015-04-06 08:31:56,734 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH,
MigrateBrokerVDSCommand, log id: 6d98fb94
2015-04-06 08:31:56,769 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH,
MigrateVDSCommand, return: MigratingFrom, log id: 7555acbd
2015-04-06 08:31:56,778 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-20) [3b191496] Correlation ID: 3b191496,
Job ID: 0f8c2d21-201e-454f-9876-dce9a1ca56fd, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: nindigo, Source: virt2,
Destination: virt3, User: admin@internal).
2015-04-06 08:33:17,633 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-35) [71f97a52] VM nindigo
9de649ca-c9a9-4ba7-bb2c-61c44e2819af moved from MigratingFrom -- Up
2015-04-06 08:33:17,633 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-35) [71f97a52] Adding VM
9de649ca-c9a9-4ba7-bb2c-61c44e2819af to re-run list
2015-04-06 08:33:17,661 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-35) [71f97a52] Rerun vm
9de649ca-c9a9-4ba7-bb2c-61c44e2819af. Called from vds virt2
2015-04-06 08:33:17,666 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-38) [71f97a52] START,
MigrateStatusVDSCommand(HostName = virt2, HostId =
1d1d1fbb-3067-4703-8b51-e0a231d344e6,
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af), log id: 6c3c9923
2015-04-06 08:33:17,669 ERROR

Re: [ovirt-users] Migration failed, No available host found

2015-04-06 Thread Jason Keltz

Hi Artyom,

The problems were caused by an MTU issue on the hosts. I have 
rectified the issue and can now migrate VMs between hosts.


Jason.
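
(For anyone hitting the same symptom, a minimal MTU sanity check; "ovirtmgmt" is only an example device name and <destination-host> is a placeholder for the peer on your migration network.)

# On each host, check the MTU of the device carrying migration traffic:
ip link show ovirtmgmt
# Verify that full-size frames pass without fragmentation
# (1472 = 1500 - 28 bytes of ICMP/IP headers; adjust if you use jumbo frames):
ping -M do -s 1472 -c 3 <destination-host>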

On 04/06/2015 10:57 AM, Jason Keltz wrote:

Hi Artyom,

Here are the vdsm logs from  virt1, virt2 (where the node is running), 
and virt3.

The logs from virt2 look suspicious, but I'm still not sure what the problem is.

http://goo.gl/GjbWUP

Jason.

On 04/06/2015 09:42 AM, Artyom Lukianov wrote:
The engine tries to migrate the VM to an available host; if the migration 
fails, the engine tries another host. For some reason the migration failed 
on all hosts:

(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command
MigrateStatusVDSCommand(HostName = virt2, HostId =
1d1d1fbb-3067-4703-8b51-e0a231d344e6,
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
MigrateStatusVDS, error = Fatal error during migration, code = 12

For further investigation we need the vdsm logs (/var/log/vdsm/vdsm.log) 
from the source and also from the destination hosts.

Thanks


- Original Message -
From: Jason Keltz j...@cse.yorku.ca
To: users users@ovirt.org
Sent: Monday, April 6, 2015 3:47:23 PM
Subject: [ovirt-users] Migration failed, No available host found

Hi.

I have 3 nodes in one cluster and 1 VM running on node2.  I'm trying to
move the VM to node 1 or node 3, and it fails with the error: Migration
failed, No available host found

I'm unable to decipher engine.log to determine the cause of the
problem.  Below is what seems to be the relevant lines from the log.
Any help would be appreciated.

Thank you!

Jason.

---

2015-04-06 08:31:56,554 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand] (ajp--127.0.0.1-8702-5)
[3b191496] Lock Acquired to object EngineLock [exclusiveLocks= key:
9de649ca-c9a9-4ba7-bb2c-61c44e2819af value: VM
, sharedLocks= ]
2015-04-06 08:31:56,686 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] Running command:
MigrateVmCommand internal: false. Entities affected :  ID:
9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: VMAction group MIGRATE_VM
with role type USER,  ID: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type:
VMAction group EDIT_VM_PROPERTIES with role type USER,  ID:
8d432949-e03c-4950-a91a-160727f7bdf2 Type: VdsGroupsAction group
CREATE_VM with role type USER
2015-04-06 08:31:56,703 INFO
[org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit] 


(org.ovirt.thread.pool-8-thread-20) [3b191496] Started HA reservation
scoring method
2015-04-06 08:31:56,727 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] START,
MigrateVDSCommand(HostName = virt2, HostId =
1d1d1fbb-3067-4703-8b51-e0a231d344e6,
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35,
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc,
dstHost=192.168.0.36:54321, migrationMethod=ONLINE,
tunnelMigration=false, migrationDowntime=0), log id: 7555acbd
2015-04-06 08:31:56,728 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] START,
MigrateBrokerVDSCommand(HostName = virt2, HostId =
1d1d1fbb-3067-4703-8b51-e0a231d344e6,
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35,
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc,
dstHost=192.168.0.36:54321, migrationMethod=ONLINE,
tunnelMigration=false, migrationDowntime=0), log id: 6d98fb94
2015-04-06 08:31:56,734 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH,
MigrateBrokerVDSCommand, log id: 6d98fb94
2015-04-06 08:31:56,769 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH,
MigrateVDSCommand, return: MigratingFrom, log id: 7555acbd
2015-04-06 08:31:56,778 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-20) [3b191496] Correlation ID: 3b191496,
Job ID: 0f8c2d21-201e-454f-9876-dce9a1ca56fd, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: nindigo, Source: virt2,
Destination: virt3, User: admin@internal).
2015-04-06 08:33:17,633 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-35) [71f97a52] VM nindigo
9de649ca-c9a9-4ba7-bb2c-61c44e2819af moved from MigratingFrom -- Up
2015-04-06 08:33:17,633 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-35) [71f97a52] Adding VM
9de649ca-c9a9-4ba7-bb2c-61c44e2819af to re-run list
2015-04-06 08:33:17,661 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-35) [71f97a52] Rerun vm
9de649ca-c9a9-4ba7-bb2c-61c44e2819af. Called from vds virt2
2015-04-06 08:33:17,666 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-38) [71f97a52] START,
MigrateStatusVDSCommand(HostName = virt2, HostId

Re: [ovirt-users] Migration failed due to Error: Fatal error during migration

2014-08-31 Thread Omer Frenkel


- Original Message -
 From: Punit Dambiwal hypu...@gmail.com
 To: Omer Frenkel ofren...@redhat.com
 Cc: Manfred Landauer manfred.landa...@fabasoft.com, users@ovirt.org
 Sent: Friday, August 29, 2014 4:30:38 AM
 Subject: Re: [ovirt-users] Migration failed due to Error: Fatal error during 
 migration
 
 Hi ,
 
 I am also facing the same issue...
 
 here are the engine logs:
 
 2014-08-29 09:27:45,432 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) FINISH, MigrateStatusVDSCommand, log
 id: 1f3e4161
 2014-08-29 09:27:45,439 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (org.ovirt.thread.pool-6-thread-24) Correlation ID: 84ff2f4, Job ID:
 8bc0b78a-c600-4f8d-98f8-66a46c66abe0, Call Stack: null, Custom Event ID:
 -1, Message: Migration failed due to Error: Fatal error during migration.

Please send the vdsm and libvirt logs from the source host for this VM.

 Trying to migrate to another Host (VM: bc16391ac105b7e68cccb47803906d0b,
 Source: compute4, Destination: compute3).
 2014-08-29 09:27:45,536 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand]
 (org.ovirt.thread.pool-6-thread-24) Running command: MigrateVmCommand
 internal: false. Entities affected :  ID:
 6134b272-cd7f-43c1-a1b1-eaefe69c6b75 Type: VM

I see this VM did migrate successfully, right?

 2014-08-29 09:27:45,610 INFO
  
 [org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit]
 (org.ovirt.thread.pool-6-thread-24) Started HA reservation scoring method
 2014-08-29 09:27:45,666 INFO
  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) START, MigrateVDSCommand(HostName =
 compute4, HostId = 3a7a4504-1434-4fd2-ac00-e8d12c043b37,
 vmId=6134b272-cd7f-43c1-a1b1-eaefe69c6b75, srcHost=compute4.3linux.com,
 dstVdsId=bcd2bd85-c501-4be4-9730-a8662462cab5, dstHost=
 compute2.3linux.com:54321, migrationMethod=ONLINE, tunnelMigration=false,
 migrationDowntime=0), log id: 640b0ccd
 2014-08-29 09:27:45,667 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) VdsBroker::migrate::Entered
 (vm_guid=6134b272-cd7f-43c1-a1b1-eaefe69c6b75, srcHost=compute4.3linux.com,
 dstHost=compute2.3linux.com:54321,  method=online
 2014-08-29 09:27:45,684 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) START, MigrateBrokerVDSCommand(HostName
 = compute4, HostId = 3a7a4504-1434-4fd2-ac00-e8d12c043b37,
 vmId=6134b272-cd7f-43c1-a1b1-eaefe69c6b75, srcHost=compute4.3linux.com,
 dstVdsId=bcd2bd85-c501-4be4-9730-a8662462cab5, dstHost=
 compute2.3linux.com:54321, migrationMethod=ONLINE, tunnelMigration=false,
 migrationDowntime=0), log id: 1b3d3891
 2014-08-29 09:27:45,702 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) FINISH, MigrateBrokerVDSCommand, log
 id: 1b3d3891
 2014-08-29 09:27:45,707 INFO
  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) FINISH, MigrateVDSCommand, return:
 MigratingFrom, log id: 640b0ccd
 2014-08-29 09:27:45,711 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (org.ovirt.thread.pool-6-thread-24) Correlation ID: 84ff2f4, Job ID:
 8bc0b78a-c600-4f8d-98f8-66a46c66abe0, Call Stack: null, Custom Event ID:
 -1, Message: Migration started (VM: bc16391ac105b7e68cccb47803906d0b,
 Source: compute4, Destination: compute2, User: admin).
 2014-08-29 09:27:47,143 INFO
  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-28) START,
 GlusterVolumesListVDSCommand(HostName = compute4, HostId =
 3a7a4504-1434-4fd2-ac00-e8d12c043b37), log id: 9e34612
 2014-08-29 09:27:47,300 INFO
  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-28) FINISH, GlusterVolumesListVDSCommand,
 return:
 {e6117925-79b1-417b-9d07-cfc31f68bc51=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@b42f1b41},
 log id: 9e34612
 2014-08-29 09:27:48,306 INFO
  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-51) RefreshVmList vm id
 6134b272-cd7f-43c1-a1b1-eaefe69c6b75 is migrating to vds compute2 ignoring
 it in the refresh until migration is done
 2014-08-29 09:27:51,349 INFO
  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-52) RefreshVmList vm id
 6134b272-cd7f-43c1-a1b1-eaefe69c6b75 is migrating to vds compute2 ignoring
 it in the refresh until migration is done
 2014-08-29 09:27:52,470 INFO
  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-49) START,
 GlusterVolumesListVDSCommand(HostName = compute4, HostId =
 3a7a4504-1434-4fd2-ac00-e8d12c043b37), log id: 7cbb4cd
 2014-08-29 09:27:52,624 INFO

Re: [ovirt-users] Migration failed due to Error: Fatal error during migration

2014-08-28 Thread Omer Frenkel


- Original Message -
 From: Manfred Landauer manfred.landa...@fabasoft.com
 To: users@ovirt.org
 Sent: Thursday, August 14, 2014 6:29:21 PM
 Subject: [ovirt-users] Migration failed due to Error: Fatal error during  
 migration
 
 
 
 Hi all
 
 
 
 When we try to migrate a VM on oVirt “Engine Version: 3.4.3-1.el6” from host
 A to host B we get this error message: “Migration failed due to Error:
 Fatal error during migration”.
 
 
 
 It looks like this occurs only when thin-provisioned HDDs are attached to the
 VM. VMs with preallocated HDDs attached migrate without a problem.
 
 
 
 Hope someone can help us to solve this issue.
 
 

it looks more like a network error:
Thread-3810747::ERROR::2014-08-14 16:48:45,471::vm::337::vm.Vm::(run) 
vmId=`494f5edc-7edd-4300-a675-f0a8883265e4`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 323, in run
self._startUnderlyingMigration()
  File /usr/share/vdsm/vm.py, line 400, in _startUnderlyingMigration
None, maxBandwidth)
  File /usr/share/vdsm/vm.py, line 838, in f
ret = attr(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 76, 
in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 1178, in 
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
libvirtError: Unable to read from monitor: Connection reset by peer

could you attach the libvirt log?
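
(A sketch of where to look on the source host, assuming default libvirt paths of that era; <VM_NAME> is a placeholder, and the log_outputs line is only needed if libvirtd.log is not already being written.)

# Per-VM qemu log on the source host:
less /var/log/libvirt/qemu/<VM_NAME>.log
# Daemon-level log; to enable it, set this in /etc/libvirt/libvirtd.conf and restart libvirtd:
#   log_outputs="1:file:/var/log/libvirt/libvirtd.log"
grep -i -e migrat -e error /var/log/libvirt/libvirtd.log | tail -n 50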

 
 Best regards
 
 Manfred
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed due to Error: Fatal error during migration

2014-08-28 Thread Punit Dambiwal
=true, vmId=6134b272-cd7f-43c1-a1b1-eaefe69c6b75,
acpiEnable=true, cpuShares=2, custom={}, spiceSslCipherSuite=DEFAULT,
memSize=4096, smp=1, displayPort=5900, emulatedMachine=rhel6.5.0,
vmType=kvm, status=Up, memGuaranteedSize=512, display=vnc, pid=5718,
smartcardEnable=false, tabletEnable=true, smpCoresPerSocket=1,
spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,
maxVCpus=160, clientIp=, devices=[Ljava.lang.Object;@53ee8569,
vmName=bc16391ac105b7e68cccb47803906d0b, cpuType=Westmere}], log id:
7bd8658c
2014-08-29 09:28:03,548 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-55) Correlation ID: 84ff2f4, Job ID:
8bc0b78a-c600-4f8d-98f8-66a46c66abe0, Call Stack: null, Custom Event ID:
-1, Message: Migration completed (VM: bc16391ac105b7e68cccb47803906d0b,
Source: compute4, Destination: compute2, Duration: 17 sec).
2014-08-29 09:28:03,551 INFO  [org.ovirt.engine.core.bll.LoginUserCommand]
(ajp--127.0.0.1-8702-1) Running command: LoginUserCommand internal: false.
2014-08-29 09:28:03,553 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand]
(DefaultQuartzScheduler_Worker-55) Lock freed to object EngineLock
[exclusiveLocks= key: 6134b272-cd7f-43c1-a1b1-eaefe69c6b75 value: VM
, sharedLocks= ]
2014-08-29 09:28:03,571 INFO
 [org.ovirt.engine.core.vdsbroker.FailedToRunVmVDSCommand]
(org.ovirt.thread.pool-6-thread-10) START, FailedToRunVmVDSCommand(HostName
= compute3, HostId = fb492af0-3489-4c15-bb9d-6e3829cb536c), log id: 52897093
2014-08-29 09:28:03,572 INFO
 [org.ovirt.engine.core.vdsbroker.FailedToRunVmVDSCommand]
(org.ovirt.thread.pool-6-thread-10) FINISH, FailedToRunVmVDSCommand, log
id: 52897093
2014-08-29 09:28:03,651 INFO  [org.ovirt.engine.core.bll.LogoutUserCommand]
(ajp--127.0.0.1-8702-1) [5c34822e] Running command: LogoutUserCommand
internal: false.
2014-08-29 09:28:03,656 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-1) [5c34822e] Correlation ID: 5c34822e, Call Stack:
null, Custom Event ID: -1, Message: User admin logged out.
2014-08-29 09:28:03,716 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-22) START, DestroyVDSCommand(HostName =
compute4, HostId = 3a7a4504-1434-4fd2-ac00-e8d12c043b37,
vmId=6134b272-cd7f-43c1-a1b1-eaefe69c6b75, force=false, secondsToWait=0,
gracefully=false), log id: 7484552f
2014-08-29 09:28:03,769 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-22) FINISH, DestroyVDSCommand, log id:
7484552f
2014-08-29 09:28:03,770 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-22) RefreshVmList vm id
6134b272-cd7f-43c1-a1b1-eaefe69c6b75 status = Down on vds compute4 ignoring
it in the refresh until migration is done
2014-08-29 09:28:08,470 INFO
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-71) START,
GlusterVolumesListVDSCommand(HostName = compute4, HostId =
3a7a4504-1434-4fd2-ac00-e8d12c043b37), log id: 6293b194
2014-08-29 09:28:08,564 INFO  [org.ovirt.engine.core.bll.LoginUserCommand]
(ajp--127.0.0.1-8702-6) Running command: LoginUserCommand internal: false.
2014-08-29 09:28:08,571 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-6) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: User admin logged in.



On Thu, Aug 28, 2014 at 7:48 PM, Omer Frenkel ofren...@redhat.com wrote:



 - Original Message -
  From: Manfred Landauer manfred.landa...@fabasoft.com
  To: users@ovirt.org
  Sent: Thursday, August 14, 2014 6:29:21 PM
  Subject: [ovirt-users] Migration failed due to Error: Fatal error
 during  migration
 
 
 
  Hi all
 
 
 
  When we try to migrate a VM on oVirt “Engine Version: 3.4.3-1.el6” from
 host
  A to host B we get this error message: “Migration failed due to Error:
  Fatal error during migration”.
 
 
 
  It looks like this occurs only when thin-provisioned HDDs are attached to
 the
  VM. VMs with preallocated HDDs attached migrate without a problem.
 
 
 
  Hope someone can help us to solve this issue.
 
 

 it looks more like a network error:
 Thread-3810747::ERROR::2014-08-14 16:48:45,471::vm::337::vm.Vm::(run)
 vmId=`494f5edc-7edd-4300-a675-f0a8883265e4`::Failed to migrate
 Traceback (most recent call last):
   File /usr/share/vdsm/vm.py, line 323, in run
 self._startUnderlyingMigration()
   File /usr/share/vdsm/vm.py, line 400, in _startUnderlyingMigration
 None, maxBandwidth)
   File /usr/share/vdsm/vm.py, line 838, in f
 ret = attr(*args, **kwargs)
   File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py,
 line 76, in wrapper
 ret = f(*args, **kwargs)
   File /usr/lib64/python2.6/site-packages/libvirt.py, line 1178, in
 migrateToURI2
 if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed',
 dom=self

[ovirt-users] Migration failed, No available host found

2014-08-25 Thread PaulCheung
Dear ALL,
I have 3 servers: KVM01, KVM02, KVM03. I want to migrate some VMs to KVM02, 
but it shows this message:


 Migration failed, No available host found (VM: AL1-Paul, Source: KVM03).




But I can migrate from kvm01 to kvm03, or kvm03 to kvm01, but not to kvm02.




I checked the firewalls, and they are all the same! Can somebody help me?
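
(A quick check worth running on kvm02, assuming the usual defaults of that release: 54321/tcp for vdsm, 16514/tcp for libvirt TLS, and 49152-49216/tcp for qemu live migration; a customized setup may use different ports.)

# Confirm the migration-related ports are open on the destination:
iptables -L -n | egrep '54321|16514|49152'
# And that vdsm is actually listening:
ss -tlnp | grep 54321        # or: netstat -tlnp | grep 54321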




  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed, No available host found

2014-08-25 Thread Omer Frenkel


- Original Message -
 From: PaulCheung eq2...@msn.com
 To: users@ovirt.org
 Sent: Monday, August 25, 2014 2:03:22 PM
 Subject: [ovirt-users] Migration failed, No available host found
 
 Dear ALL,
 
 I have 3 servers, KVM01, KVM02, KVM03
 I want to migrate some VMs to KVM02, but it shows this message:
 
 Migration failed, No available host found (VM: AL1-Paul, Source: KVM03).
 
 
 
 
 But I can migrate from kvm01 to kvm03, or kvm03 to kvm01, but not to kvm02.
 
 
 
 
 I checked the firewalls, and they are all the same! Can somebody help me?
 
 

Are you sure kvm02 has enough resources (CPU/memory) to host the new VM?
Can you please attach engine.log?


 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migration failed due to Error: Fatal error during migration

2014-08-14 Thread Landauer, Manfred
Hi all

When we try to migrate a VM on oVirt Engine Version: 3.4.3-1.el6 from host A 
to host B we get this error message: Migration failed due to Error: Fatal 
error during migration.

It looks like this occurs only when thin-provisioned HDDs are attached to the 
VM. VMs with preallocated HDDs attached migrate without a problem.

Hope someone can help us to solve this issue.

Best regards
Manfred
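
(For what it's worth, a quick way to confirm which disks are thin provisioned, assuming file-based storage; the path below only illustrates the usual /rhev/data-center layout.)

# On the host, inspect the VM disk's active volume:
qemu-img info /rhev/data-center/<pool-id>/<domain-id>/images/<image-id>/<volume-id>
# "file format: qcow2" means thin provisioned, "file format: raw" means preallocated.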



vdsm.log
Description: vdsm.log
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users