[ovirt-users] Migration failed after upgrade engine from 4.3 to 4.4

2023-05-15 Thread Emmanuel Ferrandi

Hi !

When I try to migrate a powered-on VM (regardless of OS) from one 
hypervisor to another, the VM is immediately shut down with this error 
message:


   Migration failed: Admin shut down from the engine (VM: VM, Source:
   HP11).

The oVirt engine has been upgraded from version 4.3 to version 4.4.
Some nodes are in version 4.3 and others in version 4.4.

Here are the oVirt versions for the hypervisors involved:

 * HP11 : 4.4
 * HP5 : 4.4
 * HP6 : 4.3

Here are the migration attempts I tried with a powered-on VM (source HP > destination HP):

 * HP6 > HP5 : OK
 * HP6 > HP11 : OK
 * HP5 > HP11 : OK
 * HP5 > HP6 : OK
 * HP11 > HP5 : NOK
 * HP11 > HP6 : OK

As shown above, migrating a VM between hosts running different oVirt 
versions is not a problem.
Migration between the two HPs running the same 4.4 version works only in 
one direction (HP5 to HP11) and fails in the other direction (HP11 to HP5).


I have already tried reinstalling both HPs with version 4.4, but without success.

Here are the logs on HP5 concerning the VM:

   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api.virt] START destroy(gracefulAttempts=1)
   from=:::172.20.3.250,37534, flow_id=43364065,
   vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:48)
   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api] FINISH destroy error=Virtual machine does not
   exist: {'vmId': 'd14f75cd-1cb1-440b-9780-6b6ee78149ac'} (api:129)
   /var/log/vdsm/vdsm.log:2023-05-11 14:32:56,303+0200 INFO
   (jsonrpc/3) [api.virt] FINISH destroy return={'status': {'code': 1,
   'message': "Virtual machine does not exist: {'vmId':
   'd14f75cd-1cb1-440b-9780-6b6ee78149ac'}"}}
   from=:::172.20.3.250,37534, flow_id=43364065,
   vmId=d14f75cd-1cb1-440b-9780-6b6ee78149ac (api:54)

   /var/log/libvirt/qemu/VM.log:2023-03-24 14:56:51.474+:
   initiating migration
   /var/log/libvirt/qemu/VM.log:2023-03-24 14:56:54.342+:
   shutting down, reason=migrated
   /var/log/libvirt/qemu/VM.log:2023-03-24T14:56:54.870528Z qemu-kvm:
   terminating on signal 15 from pid 4379 ()
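
To correlate this failed attempt across the source host, the destination host and the engine, a minimal sketch (using only the VM ID and flow_id quoted above; substitute your own values) is:

  # Sketch: follow one migration attempt on both hosts and on the engine.
  # The VM ID and flow_id are the ones from the excerpts above.
  VM_ID=d14f75cd-1cb1-440b-9780-6b6ee78149ac
  FLOW_ID=43364065

  # On HP11 (source) and HP5 (destination):
  grep "$VM_ID" /var/log/vdsm/vdsm.log* | grep -iE "migrat|destroy|error"

  # On the engine host:
  grep "$FLOW_ID" /var/log/ovirt-engine/engine.log*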

Here are the logs on the engine concerning the VM:


   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,333+02 INFO 
   [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default
   task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
   MigrateVDSCommand(
   MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
   vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
   dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
   dstHost='HP5:54321', migrationMethod='ONLINE',
   tunnelMigration='false', migrationDowntime='0', autoConverge='true',
   migrateCompressed='false', migrateEncrypted='null',
   consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
   maxIncomingMigrations='2', maxOutgoingMigrations='2',
   convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
   stalling=[{limit=1, action={name=setDowntime, params=[150]}},
   {limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
   action={name=setDowntime, params=[300]}}, {limit=4,
   action={name=setDowntime, params=[400]}}, {limit=6,
   action={name=setDowntime, params=[500]}}, {limit=-1,
   action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
   6a3507d0
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:53,334+02 INFO
   [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
   (default task-18197) [3f672d7f-f617-47a2-b0e9-c521656e8c01] START,
   MigrateBrokerVDSCommand(HostName = HP11,
   MigrateVDSCommandParameters:{hostId='6817e182-f163-4a44-9ad6-53156b8bb5a0',
   vmId='d14f75cd-1cb1-440b-9780-6b6ee78149ac', srcHost='HP11',
   dstVdsId='d2481de5-5ad2-4d06-9545-d5628cb87bcb',
   dstHost='HP5:54321', migrationMethod='ONLINE',
   tunnelMigration='false', migrationDowntime='0', autoConverge='true',
   migrateCompressed='false', migrateEncrypted='null',
   consoleAddress='null', maxBandwidth='256', enableGuestEvents='true',
   maxIncomingMigrations='2', maxOutgoingMigrations='2',
   convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
   stalling=[{limit=1, action={name=setDowntime, params=[150]}},
   {limit=2, action={name=setDowntime, params=[200]}}, {limit=3,
   action={name=setDowntime, params=[300]}}, {limit=4,
   action={name=setDowntime, params=[400]}}, {limit=6,
   action={name=setDowntime, params=[500]}}, {limit=-1,
   action={name=abort, params=[]}}]]', dstQemu='192.168.1.1'}), log id:
   f254f72
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,246+02 INFO 
   [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
   (ForkJoinPool-1-worker-9) [3f0e966d] VM
   'd14f75cd-1cb1-440b-9780-6b6ee78149ac' was reported as Down on VDS
   '6817e182-f163-4a44-9ad6-53156b8bb5a0'(HP11)
   /var/log/ovirt-engine/engine.log:2023-05-11 14:32:56,296+02 INFO 
   [org.ovirt.engine.core.bll.SaveVmExternalDataCommand]
   (ForkJoinPool-1-worker-9) [43364065] Running 

[ovirt-users] Migration failed due to an Error: Fatal error during migration

2023-02-12 Thread Anthony Bustillos Gonzalez
Hello,

I have this issue when I try to migrate a VM to another host.

"Migration failed due to an Error: Fatal error during migration "

OS: VERSION=Oracle Linux "8.7"
Qemu: KVM 6.1.1

Log:

2023-02-09 11:50:38,991-06 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-264062) [757bd45d] EVENT_ID: 
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal 
error during migration (VM: Coopavg-BDDESA01, Source: Coopavg-Moscow, 
Destination: Coopavg-Berlin).
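
When the engine only reports this generic event, the concrete reason is usually in vdsm.log and in the per-VM qemu log on the two hosts involved. A minimal sketch (VM and host names taken from the event above; the qemu log path assumes the default libvirt location):

  # Run on Coopavg-Moscow (source) and Coopavg-Berlin (destination):
  grep "Coopavg-BDDESA01" /var/log/vdsm/vdsm.log* | grep -iE "error|fail" | tail -n 20
  tail -n 50 /var/log/libvirt/qemu/Coopavg-BDDESA01.log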


[ovirt-users] Migration failed due to an Error: Fatal error during migration

2022-01-24 Thread Gunasekhar Kothapalli via Users

I am able to power on VMs on the newly upgraded host, but I am not able to 
migrate VMs from other hosts to the new host, or from the newly upgraded 
host to other hosts. This worked fine before the upgrade.

Host logs
==
Unable to read from monitor: Connection reset by peer
internal error: qemu unexpectedly closed the monitor: 
2022-01-24T17:51:46.598571Z
qemu-kvm: get_pci_config_device: Bad config >
2022-01-24T17:51:46.598627Z qemu-kvm: Failed to load PCIDevice:config
2022-01-24T17:51:46.598635Z qemu-kvm: Failed to load
pcie-root-port:parent_obj.parent_obj.parent_obj
2022-01-24T17:51:46.598642Z qemu-kvm: error while loading state for instance 
0x0 of device
':00:02.0/pcie-root-port'
2022-01-24T17:51:46.598830Z qemu-kvm: load of migration failed: Invalid argument
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
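
Errors of the form "Failed to load PCIDevice:config ... load of migration failed: Invalid argument" generally mean the destination qemu could not accept the device state produced by the source, which often points at mismatched qemu-kvm builds or machine types between the two hosts. A rough comparison sketch, using standard rpm/libvirt commands rather than anything specific to this report (the VM name zzz2019 is taken from the engine log below):

  # Run on both the source and the newly upgraded destination host:
  rpm -q qemu-kvm libvirt-daemon-kvm
  virsh -r dumpxml zzz2019 | grep -i machine     # machine type of the running VM (source only)
  virsh -r capabilities | grep -i machine        # machine types the host offers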

Engine Logs
===

2022-01-24 11:31:25,080-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-21) [] Adding VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) to re-run list
2022-01-24 11:31:25,099-07 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-2331914) [] EVENT_ID: 
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal 
error during migration (VM: zzz2019, Source: lcoskvmp07.cos.is.keysight.com, 
Destination: lcoskvmp03.cos.is.keysight.com).
2022-01-24 18:39:47,897-07 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-771) [a78e85d4-068a-41c1-a8fa-b3acd8c69317] EVENT_ID: 
VM_MIGRATION_START(62), Migration started (VM: zzz2019, Source: 
lcoskvmp07.cos.is.keysight.com, Destination: lcoskvmp03.cos.is.keysight.com, 
User: k.gunasek...@non.keysight.com@KEYSIGHT).
2022-01-24 18:40:01,417-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-27) [] VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) was unexpectedly detected as 
'Down' on VDS 
'ee23b44d-976d-4889-8769-59b56e4b23c0'(lcoskvmp03.cos.is.keysight.com) 
(expected on '0d58953f-b3cc-4bac-b3b2-08ba1bca')
2022-01-24 18:40:01,589-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-27) [] VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) was unexpectedly detected as 
'Down' on VDS 
'ee23b44d-976d-4889-8769-59b56e4b23c0'(lcoskvmp03.cos.is.keysight.com) 
(expected on '0d58953f-b3cc-4bac-b3b2-08ba1bca')
2022-01-24 18:40:01,589-07 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-27) [] Migration of VM 'zzz2019' to host 
'lcoskvmp03.cos.is.keysight.com' failed: VM destroyed during the startup.
2022-01-24 18:40:01,591-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-17) [] VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) moved from 'MigratingFrom' --> 
'Up'
2022-01-24 18:40:01,591-07 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-17) [] Adding VM 
'9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) to re-run list
2022-01-24 18:40:01,611-07 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-2348837) [] EVENT_ID: 
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal 
error during migration (VM: zzz2019, Source: lcoskvmp07.cos.is.keysight.com, 
Destination: lcoskvmp03.cos.is.keysight.com).


[ovirt-users] migration failed

2019-01-01 Thread 董青龙
Hi all,
I have an oVirt 4.2 environment with 3 hosts. Currently, none of the VMs in 
this environment can be migrated, although all of them can be started on any 
of the 3 hosts. Can anyone help? Thanks a lot!


Engine logs:


2019-01-02 09:41:26,868+08 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand] 
(default task-9) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[eff7f697-8a07-46e5-a631-a1011a0eb836=VM]', 
sharedLocks=''}'
2019-01-02 09:41:26,978+08 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] Running command: MigrateVmCommand 
internal: false. Entities affected :  ID: eff7f697-8a07-46e5-a631-a1011a0eb836 
Type: VMAction group MIGRATE_VM with role type USER
2019-01-02 09:41:27,019+08 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] START, MigrateVDSCommand( 
MigrateVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', 
vmId='eff7f697-8a07-46e5-a631-a1011a0eb836', srcHost='horeb66', 
dstVdsId='5bb18f6e-9c7e-4afd-92de-f6482bf752e5', dstHost='horeb65:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, 
action={name=abort, params=[]}}]]', dstQemu='192.168.128.78'}), log id: 1bd72db2
2019-01-02 09:41:27,019+08 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] START, MigrateBrokerVDSCommand(HostName 
= horeb66, 
MigrateVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', 
vmId='eff7f697-8a07-46e5-a631-a1011a0eb836', srcHost='horeb66', 
dstVdsId='5bb18f6e-9c7e-4afd-92de-f6482bf752e5', dstHost='horeb65:54321', 
migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', 
autoConverge='true', migrateCompressed='false', consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, 
params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, 
{limit=2, action={name=setDowntime, params=[200]}}, {limit=3, 
action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, 
params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, 
action={name=abort, params=[]}}]]', dstQemu='192.168.128.78'}), log id: 380b8d38
2019-01-02 09:41:27,025+08 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] FINISH, MigrateBrokerVDSCommand, log id: 
380b8d38
2019-01-02 09:41:27,029+08 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] FINISH, MigrateVDSCommand, return: 
MigratingFrom, log id: 1bd72db2
2019-01-02 09:41:27,036+08 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-168938) 
[3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] EVENT_ID: VM_MIGRATION_START(62), 
Migration started (VM: win7, Source: horeb66, Destination: horeb65, User: 
admin@internal-authz). 
2019-01-02 09:41:41,557+08 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] VM 
'eff7f697-8a07-46e5-a631-a1011a0eb836'(win7) moved from 'MigratingFrom' --> 'Up'
2019-01-02 09:41:41,557+08 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Adding VM 
'eff7f697-8a07-46e5-a631-a1011a0eb836'(win7) to re-run list
2019-01-02 09:41:41,567+08 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Rerun VM 
'eff7f697-8a07-46e5-a631-a1011a0eb836'. Called from VDS 'horeb66'
2019-01-02 09:41:41,570+08 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-168945) [] START, 
MigrateStatusVDSCommand(HostName = horeb66, 
MigrateStatusVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e',
 vmId='eff7f697-8a07-46e5-a631-a1011a0eb836'}), log id: 4ed2923c
2019-01-02 09:41:41,573+08 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
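
The engine log above only shows the migration being re-run; the concrete libvirt/qemu error is normally in the source host's vdsm.log around the same timestamp. A minimal sketch (VM ID taken from the log above):

  # On horeb66 (the source host):
  grep "eff7f697-8a07-46e5-a631-a1011a0eb836" /var/log/vdsm/vdsm.log* \
      | grep -iE "error|abort|fail" | tail -n 20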

Re: [ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-29 Thread Dan Kenigsberg
Thanks for following up on this. We need to put a little more effort into

Bug 1400952 - [RFE] Resolve listen IP for graphics attached to Open
vSwitch network

so that the hook is no longer needed.

Please let us know how oVirt+OvS is working for you!


On Wed, Mar 29, 2017 at 6:17 PM, Devin A. Bougie wrote:
> Just in case anyone else runs into this, you need to set 
> "migration_ovs_hook_enabled=True" in vdsm.conf.  It seems the vdsm.conf 
> created by "hosted-engine --deploy" did not list all of the options, so I 
> overlooked this one.
>
> Thanks for all the help,
> Devin
>
> On Mar 27, 2017, at 11:10 AM, Devin A. Bougie wrote:
>> Hi, All.  We have a new oVirt 4.1.1 cluster up with the OVS switch type.  
>> Everything seems to be working great, except for live migration.
>>
>> I believe the red flag in vdsm.log on the source is:
>> Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
>>
>> Which results from vdsm assigning an arbitrary bridge name to each ovs 
>> bridge.
>>
>> Please see below for more details on the bridges and excerpts from the logs. 
>>  Any help would be greatly appreciated.
>>
>> Many thanks,
>> Devin
>>
>> SOURCE OVS BRIDGES:
>> # ovs-vsctl show
>> 6d96d9a5-e30d-455b-90c7-9e9632574695
>>Bridge "vdsmbr_QwORbsw2"
>>Port "vdsmbr_QwORbsw2"
>>Interface "vdsmbr_QwORbsw2"
>>type: internal
>>Port "vnet0"
>>Interface "vnet0"
>>Port classepublic
>>Interface classepublic
>>type: internal
>>Port "ens1f0"
>>Interface "ens1f0"
>>Bridge "vdsmbr_9P7ZYKWn"
>>Port ovirtmgmt
>>Interface ovirtmgmt
>>type: internal
>>Port "ens1f1"
>>Interface "ens1f1"
>>Port "vdsmbr_9P7ZYKWn"
>>Interface "vdsmbr_9P7ZYKWn"
>>type: internal
>>ovs_version: "2.7.0"
>>
>> DESTINATION OVS BRIDGES:
>> # ovs-vsctl show
>> f66d765d-712a-4c81-b18e-da1acc9cfdde
>>Bridge "vdsmbr_vdpp0dOd"
>>Port "vdsmbr_vdpp0dOd"
>>Interface "vdsmbr_vdpp0dOd"
>>type: internal
>>Port "ens1f0"
>>Interface "ens1f0"
>>Port classepublic
>>Interface classepublic
>>type: internal
>>Bridge "vdsmbr_3sEwEKd1"
>>Port "vdsmbr_3sEwEKd1"
>>Interface "vdsmbr_3sEwEKd1"
>>type: internal
>>Port "ens1f1"
>>Interface "ens1f1"
>>Port ovirtmgmt
>>Interface ovirtmgmt
>>type: internal
>>ovs_version: "2.7.0"
>>
>>
>> SOURCE VDSM LOG:
>> ...
>> 2017-03-27 10:57:02,567-0400 INFO  (jsonrpc/1) [vdsm.api] START migrate 
>> args=(, {u'incomingLimit': 2, u'src': 
>> u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', 
>> u'tunneled': u'false', u'enableGuestEvents': False, u'dst': 
>> u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': 
>> u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
>> u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, 
>> u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37)
>> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH migrate 
>> return={'status': {'message': 'Migration in progress', 'code': 0}, 
>> 'progress': 0} (api:43)
>> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC 
>> call VM.migrate succeeded in 0.01 seconds (__init__:515)
>> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
>> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM 
>> took: 0 seconds (migration:455)
>> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
>> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
>> qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri 
>> tcp://192.168.55.81 (migration:480)
>> 2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
>> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 
>> 'vdsmbr_QwORbsw2': No such device (migration:287)
>> 2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
>> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate 
>> (migration:429)
>> Traceback (most recent call last):
>>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, 
>> in run
>>self._startUnderlyingMigration(time.time())
>>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, 
>> in _startUnderlyingMigration
>>self._perform_with_downtime_thread(duri, muri)
>>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, 
>> in _perform_with_downtime_thread
>>self._perform_migration(duri, muri)
>>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, 
>> in _perform_migration
>>self._vm._dom.migrateToURI3(duri, params, flags)
>>  File 

Re: [ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-29 Thread Devin A. Bougie
Just in case anyone else runs into this, you need to set 
"migration_ovs_hook_enabled=True" in vdsm.conf.  It seems the vdsm.conf created 
by "hosted-engine --deploy" did not list all of the options, so I overlooked 
this one.
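
For reference, a minimal sketch of that change (the [vars] section name is an assumption; check the commented defaults shipped with your vdsm build), to be applied on each host and followed by a vdsmd restart:

  # Sketch: make sure vdsm.conf enables the OVS migration hook,
  # i.e. it contains (assumed to belong under the [vars] section):
  #
  #   [vars]
  #   migration_ovs_hook_enabled = true
  #
  grep -n migration_ovs_hook_enabled /etc/vdsm/vdsm.conf
  systemctl restart vdsmd    # after editing the file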

Thanks for all the help,
Devin

On Mar 27, 2017, at 11:10 AM, Devin A. Bougie  wrote:
> Hi, All.  We have a new oVirt 4.1.1 cluster up with the OVS switch type.  
> Everything seems to be working great, except for live migration.
> 
> I believe the red flag in vdsm.log on the source is:
> Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)
> 
> Which results from vdsm assigning an arbitrary bridge name to each ovs bridge.
> 
> Please see below for more details on the bridges and excerpts from the logs.  
> Any help would be greatly appreciated.
> 
> Many thanks,
> Devin
> 
> SOURCE OVS BRIDGES:
> # ovs-vsctl show
> 6d96d9a5-e30d-455b-90c7-9e9632574695
>Bridge "vdsmbr_QwORbsw2"
>Port "vdsmbr_QwORbsw2"
>Interface "vdsmbr_QwORbsw2"
>type: internal
>Port "vnet0"
>Interface "vnet0"
>Port classepublic
>Interface classepublic
>type: internal
>Port "ens1f0"
>Interface "ens1f0"
>Bridge "vdsmbr_9P7ZYKWn"
>Port ovirtmgmt
>Interface ovirtmgmt
>type: internal
>Port "ens1f1"
>Interface "ens1f1"
>Port "vdsmbr_9P7ZYKWn"
>Interface "vdsmbr_9P7ZYKWn"
>type: internal
>ovs_version: "2.7.0"
> 
> DESTINATION OVS BRIDGES:
> # ovs-vsctl show
> f66d765d-712a-4c81-b18e-da1acc9cfdde
>Bridge "vdsmbr_vdpp0dOd"
>Port "vdsmbr_vdpp0dOd"
>Interface "vdsmbr_vdpp0dOd"
>type: internal
>Port "ens1f0"
>Interface "ens1f0"
>Port classepublic
>Interface classepublic
>type: internal
>Bridge "vdsmbr_3sEwEKd1"
>Port "vdsmbr_3sEwEKd1"
>Interface "vdsmbr_3sEwEKd1"
>type: internal
>Port "ens1f1"
>Interface "ens1f1"
>Port ovirtmgmt
>Interface ovirtmgmt
>type: internal
>ovs_version: "2.7.0"
> 
> 
> SOURCE VDSM LOG:
> ...
> 2017-03-27 10:57:02,567-0400 INFO  (jsonrpc/1) [vdsm.api] START migrate 
> args=(, {u'incomingLimit': 2, u'src': 
> u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', 
> u'tunneled': u'false', u'enableGuestEvents': False, u'dst': 
> u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': 
> u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
> u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, 
> u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37)
> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH migrate 
> return={'status': {'message': 'Migration in progress', 'code': 0}, 
> 'progress': 0} (api:43)
> 2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC 
> call VM.migrate succeeded in 0.01 seconds (__init__:515)
> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM 
> took: 0 seconds (migration:455)
> 2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
> qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri 
> tcp://192.168.55.81 (migration:480)
> 2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 
> 'vdsmbr_QwORbsw2': No such device (migration:287)
> 2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
> (vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate 
> (migration:429)
> Traceback (most recent call last):
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in 
> run
>self._startUnderlyingMigration(time.time())
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in 
> _startUnderlyingMigration
>self._perform_with_downtime_thread(duri, muri)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in 
> _perform_with_downtime_thread
>self._perform_migration(duri, muri)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in 
> _perform_migration
>self._vm._dom.migrateToURI3(duri, params, flags)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
>ret = attr(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, 
> in wrapper
>ret = f(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
>return func(inst, *args, **kwargs)
>  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in 
> migrateToURI3
>if ret == -1: raise 

[ovirt-users] migration failed - "Cannot get interface MTU on 'vdsmbr_...'

2017-03-27 Thread Devin A. Bougie
Hi, All.  We have a new oVirt 4.1.1 cluster up with the OVS switch type.  
Everything seems to be working great, except for live migration.

I believe the red flag in vdsm.log on the source is:
Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device (migration:287)

This results from vdsm assigning an arbitrary bridge name to each OVS bridge.

Please see below for more details on the bridges and excerpts from the logs.  
Any help would be greatly appreciated.

Many thanks,
Devin

SOURCE OVS BRIDGES:
# ovs-vsctl show
6d96d9a5-e30d-455b-90c7-9e9632574695
Bridge "vdsmbr_QwORbsw2"
Port "vdsmbr_QwORbsw2"
Interface "vdsmbr_QwORbsw2"
type: internal
Port "vnet0"
Interface "vnet0"
Port classepublic
Interface classepublic
type: internal
Port "ens1f0"
Interface "ens1f0"
Bridge "vdsmbr_9P7ZYKWn"
Port ovirtmgmt
Interface ovirtmgmt
type: internal
Port "ens1f1"
Interface "ens1f1"
Port "vdsmbr_9P7ZYKWn"
Interface "vdsmbr_9P7ZYKWn"
type: internal
ovs_version: "2.7.0"

DESTINATION OVS BRIDGES:
# ovs-vsctl show
f66d765d-712a-4c81-b18e-da1acc9cfdde
Bridge "vdsmbr_vdpp0dOd"
Port "vdsmbr_vdpp0dOd"
Interface "vdsmbr_vdpp0dOd"
type: internal
Port "ens1f0"
Interface "ens1f0"
Port classepublic
Interface classepublic
type: internal
Bridge "vdsmbr_3sEwEKd1"
Port "vdsmbr_3sEwEKd1"
Interface "vdsmbr_3sEwEKd1"
type: internal
Port "ens1f1"
Interface "ens1f1"
Port ovirtmgmt
Interface ovirtmgmt
type: internal
ovs_version: "2.7.0"
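
The two listings above already hint at the core problem: the auto-generated bridge names differ between the hosts, so the source-side name referenced by the VM cannot be resolved on the destination. A quick cross-check sketch with standard Open vSwitch tooling (bridge name taken from this report):

  # Compare the auto-generated bridge names on both hosts:
  ovs-vsctl list-br
  # On the destination, the source's bridge should be reported as missing:
  ovs-vsctl br-exists vdsmbr_QwORbsw2; echo $?    # exits non-zero if absent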


SOURCE VDSM LOG:
...
2017-03-27 10:57:02,567-0400 INFO  (jsonrpc/1) [vdsm.api] START migrate 
args=(, {u'incomingLimit': 2, u'src': 
u'192.168.55.84', u'dstqemu': u'192.168.55.81', u'autoConverge': u'false', 
u'tunneled': u'false', u'enableGuestEvents': False, u'dst': 
u'lnxvirt01-p55.classe.cornell.edu:54321', u'vmId': 
u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true', 
u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000, u'method': 
u'online', 'mode': 'remote'}) kwargs={} (api:37)
2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH migrate 
return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 
0} (api:43)
2017-03-27 10:57:02,570-0400 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
VM.migrate succeeded in 0.01 seconds (__init__:515)
2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM took: 
0 seconds (migration:455)
2017-03-27 10:57:03,028-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
qemu+tls://lnxvirt01-p55.classe.cornell.edu/system with miguri 
tcp://192.168.55.81 (migration:480)
2017-03-27 10:57:03,224-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Cannot get interface MTU on 
'vdsmbr_QwORbsw2': No such device (migration:287)
2017-03-27 10:57:03,322-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in 
run
self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in 
_startUnderlyingMigration
self._perform_with_downtime_thread(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in 
_perform_with_downtime_thread
self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in 
_perform_migration
self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in 
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)
libvirtError: Cannot get interface MTU on 'vdsmbr_QwORbsw2': No such device
2017-03-27 10:57:03,435-0400 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:33716 
(protocoldetector:72)
2017-03-27 10:57:03,452-0400 INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from ::1:33716 (protocoldetector:127)
2017-03-27 10:57:03,452-0400 INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT 

Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Vinzenz Feenstra

> On Jun 17, 2016, at 12:47 PM, Vinzenz Feenstra  wrote:
> 
>> 
>> On Jun 17, 2016, at 12:42 PM, Michal Skrivanek wrote:
>> 
>> 
>>> On 17 Jun 2016, at 12:37, Fabrice Bacchella wrote:
>>> 
>>> 
 On 17 June 2016 at 12:33, Vinzenz Feenstra wrote:
 
 
> On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella wrote:
> 
> 
>> On 17 June 2016 at 12:05, Vinzenz Feenstra wrote:
>> 
>> Hi Fabrice,
>> 
>>> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella wrote:
>>> 
>>> I'm running an up to date ovirt setup.
>>> 
>>> I tried to put an host in maintenance mode, with one VM running on it.
>>> 
>>> It failed with this message in vdsm.log:
>>> 
> 
>>> libvirtError: internal error: process exited while connecting to 
>>> monitor: 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>>>  Failed to bind socket to 
>>> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>>>  Permission denied
>> 
>> This is pretty odd, could you please send me the out put of this:
>> 
>> # rpm -qa | grep vdsm
>> 
>> From the target and destination hosts. Thanks.
> 
>>> 
 
 Thanks.
 
 And on the destination server what are the access rights on 
 /var/lib/libvirt/qemu/channels? 
>>> On both:
>>> drwxrwxr-x 2 vdsm qemu 137 Jun 14 15:35 /var/lib/libvirt/qemu/channels
>>> drwxrwxr-x 2 vdsm qemu 6 May 24 16:03 /var/lib/libvirt/qemu/channels
>>> 
 And if you have SELinux enabled can you temporary set it to permissive on 
 the destination and try to migrate?
>>> 
>>> SELinux is disabled on both.
>> 
>> And was the VM started in the same SELinux state or did you change it 
>> afterwards while it was running?
> 
> It is disabled since installation (We moved the conversation for now to the 
> IRC) 
> 
> If we found a solution / reason I will respond to the thread to have it 
> documented.

So the reason for the errors was the wrongly set ownership of the 
/var/lib/libvirt/qemu folder: rwxr-x--x 8 oneadmin oneadmin 
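
A quick way to compare the two hosts before retrying a migration, as a sketch only; the expected owners are best taken from a known-good oVirt host rather than from this note:

  # Check ownership and permissions of the directories holding the
  # guest-agent channel sockets, and vdsm's group membership:
  ls -ld /var/lib/libvirt/qemu /var/lib/libvirt/qemu/channels
  ls -lZ /var/lib/libvirt/qemu/channels | head    # SELinux labels, if enforcing
  id vdsm                                         # should include the qemu group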


> 
>> 
>>> 
>>> 


Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Vinzenz Feenstra

> On Jun 17, 2016, at 12:42 PM, Michal Skrivanek wrote:
> 
> 
>> On 17 Jun 2016, at 12:37, Fabrice Bacchella wrote:
>> 
>> 
>>> On 17 June 2016 at 12:33, Vinzenz Feenstra wrote:
>>> 
>>> 
 On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella wrote:
 
 
> On 17 June 2016 at 12:05, Vinzenz Feenstra wrote:
> 
> Hi Fabrice,
> 
>> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella wrote:
>> 
>> I'm running an up to date ovirt setup.
>> 
>> I tried to put an host in maintenance mode, with one VM running on it.
>> 
>> It failed with this message in vdsm.log:
>> 
 
>> libvirtError: internal error: process exited while connecting to 
>> monitor: 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>>  Failed to bind socket to 
>> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>>  Permission denied
> 
> This is pretty odd, could you please send me the out put of this:
> 
> # rpm -qa | grep vdsm
> 
> From the target and destination hosts. Thanks.
 
>> 
>>> 
>>> Thanks.
>>> 
>>> And on the destination server what are the access rights on 
>>> /var/lib/libvirt/qemu/channels? 
>> On both:
>> drwxrwxr-x 2 vdsm qemu 137 Jun 14 15:35 /var/lib/libvirt/qemu/channels
>> drwxrwxr-x 2 vdsm qemu 6 May 24 16:03 /var/lib/libvirt/qemu/channels
>> 
>>> And if you have SELinux enabled can you temporary set it to permissive on 
>>> the destination and try to migrate?
>> 
>> SELinux is disabled on both.
> 
> And was the VM started in the same SELinux state or did you change it 
> afterwards while it was running?

It is disabled since installation (We moved the conversation for now to the 
IRC) 

If we found a solution / reason I will respond to the thread to have it 
documented.

> 
>> 
>> 


Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Michal Skrivanek

> On 17 Jun 2016, at 12:37, Fabrice Bacchella wrote:
> 
> 
>> On 17 June 2016 at 12:33, Vinzenz Feenstra wrote:
>> 
>> 
>>> On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella wrote:
>>> 
>>> 
 On 17 June 2016 at 12:05, Vinzenz Feenstra wrote:
 
 Hi Fabrice,
 
> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella wrote:
> 
> I'm running an up to date ovirt setup.
> 
> I tried to put an host in maintenance mode, with one VM running on it.
> 
> It failed with this message in vdsm.log:
> 
>>> 
> libvirtError: internal error: process exited while connecting to monitor: 
> 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>  Failed to bind socket to 
> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>  Permission denied
 
 This is pretty odd, could you please send me the out put of this:
 
 # rpm -qa | grep vdsm
 
 From the target and destination hosts. Thanks.
>>> 
> 
>> 
>> Thanks.
>> 
>> And on the destination server what are the access rights on 
>> /var/lib/libvirt/qemu/channels? 
> On both:
> drwxrwxr-x 2 vdsm qemu 137 Jun 14 15:35 /var/lib/libvirt/qemu/channels
> drwxrwxr-x 2 vdsm qemu 6 May 24 16:03 /var/lib/libvirt/qemu/channels
> 
>> And if you have SELinux enabled can you temporary set it to permissive on 
>> the destination and try to migrate?
> 
> SELinux is disabled on both.

And was the VM started in the same SELinux state or did you change it 
afterwards while it was running?

> 
> 


Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Fabrice Bacchella

> On 17 June 2016 at 12:33, Vinzenz Feenstra wrote:
> 
> 
>> On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella wrote:
>> 
>> 
>>> On 17 June 2016 at 12:05, Vinzenz Feenstra wrote:
>>> 
>>> Hi Fabrice,
>>> 
 On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella wrote:
 
 I'm running an up to date ovirt setup.
 
 I tried to put an host in maintenance mode, with one VM running on it.
 
 It failed with this message in vdsm.log:
 
>> 
 libvirtError: internal error: process exited while connecting to monitor: 
 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
 socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
  Failed to bind socket to 
 /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
  Permission denied
>>> 
>>> This is pretty odd, could you please send me the out put of this:
>>> 
>>> # rpm -qa | grep vdsm
>>> 
>>> From the target and destination hosts. Thanks.
>> 

> 
> Thanks.
> 
> And on the destination server what are the access rights on 
> /var/lib/libvirt/qemu/channels? 
On both:
drwxrwxr-x 2 vdsm qemu 137 Jun 14 15:35 /var/lib/libvirt/qemu/channels
drwxrwxr-x 2 vdsm qemu 6 May 24 16:03 /var/lib/libvirt/qemu/channels

> And if you have SELinux enabled can you temporary set it to permissive on the 
> destination and try to migrate?

SELinux is disabled on both.




Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Vinzenz Feenstra

> On Jun 17, 2016, at 12:12 PM, Fabrice Bacchella wrote:
> 
> 
>> On 17 June 2016 at 12:05, Vinzenz Feenstra wrote:
>> 
>> Hi Fabrice,
>> 
>>> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella wrote:
>>> 
>>> I'm running an up to date ovirt setup.
>>> 
>>> I tried to put an host in maintenance mode, with one VM running on it.
>>> 
>>> It failed with this message in vdsm.log:
>>> 
> 
>>> libvirtError: internal error: process exited while connecting to monitor: 
>>> 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>>>  Failed to bind socket to 
>>> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>>>  Permission denied
>> 
>> This is pretty odd, could you please send me the out put of this:
>> 
>> # rpm -qa | grep vdsm
>> 
>> From the target and destination hosts. Thanks.
> 
> On the host I was trying to put on maintenance:
> vdsm-xmlrpc-4.17.28-0.el7.centos.noarch
> vdsm-4.17.28-0.el7.centos.noarch
> vdsm-infra-4.17.28-0.el7.centos.noarch
> vdsm-yajsonrpc-4.17.28-0.el7.centos.noarch
> vdsm-python-4.17.28-0.el7.centos.noarch
> vdsm-jsonrpc-4.17.28-0.el7.centos.noarch
> vdsm-hook-vmfex-dev-4.17.28-0.el7.centos.noarch
> vdsm-cli-4.17.28-0.el7.centos.noarch
> 
> And it was trying to send to an host with:
> vdsm-yajsonrpc-4.17.28-1.el7.noarch
> vdsm-cli-4.17.28-1.el7.noarch
> vdsm-python-4.17.28-1.el7.noarch
> vdsm-hook-vmfex-dev-4.17.28-1.el7.noarch
> vdsm-xmlrpc-4.17.28-1.el7.noarch
> vdsm-4.17.28-1.el7.noarch
> vdsm-infra-4.17.28-1.el7.noarch
> vdsm-jsonrpc-4.17.28-1.el7.noarch
> 
> And in the log about that:
> jsonrpc.Executor/1::DEBUG::2016-06-17 
> 11:39:57,233::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 
> 'VM.migrate' in bridge with {u'params': {u
> 'tunneled': u'false', u'dstqemu': u'XX.XX.XX.28', u'autoConverge': u'false', 
> u'src': u'nb0101.XXX', u'dst': u'nb0105.XXX:54321', u
> 'vmId': u'b82209c9-42ff-457c-bb98-b6a2034833fc', u'abortOnError': u'true', 
> u'compressed': u'false', u'method': u'online'}, u'vmID': 
> u'b82209c9-42ff-457c-bb9
> 8-b6a2034833fc'}
> jsonrpc.Executor/1::DEBUG::2016-06-17 11:39:57,234::API::547::vds::(migrate) 
> {u'tunneled': u'false', u'dstqemu': u'XX.XX.XX.28', u'autoConverge': 
> u'false', 
> u'src': u'nb0101.XXX', u'dst': u'nb0105.XXX:54321', u'vmId': 
> u'b82209c9-42ff-457c-bb98-b6a2034833fc', u'abortOnError': u'true', u'
> compressed': u'false', u'method': u'online'}

Thanks.

And on the destination server what are the access rights on 
/var/lib/libvirt/qemu/channels? 
And if you have SELinux enabled can you temporary set it to permissive on the 
destination and try to migrate?


> 



Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Fabrice Bacchella

> On 17 June 2016 at 12:05, Vinzenz Feenstra wrote:
> 
> Hi Fabrice,
> 
>> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella wrote:
>> 
>> I'm running an up to date ovirt setup.
>> 
>> I tried to put an host in maintenance mode, with one VM running on it.
>> 
>> It failed with this message in vdsm.log:
>> 

>> libvirtError: internal error: process exited while connecting to monitor: 
>> 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>>  Failed to bind socket to 
>> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>>  Permission denied
> 
> This is pretty odd, could you please send me the out put of this:
> 
> # rpm -qa | grep vdsm
> 
> From the target and destination hosts. Thanks.

On the host I was trying to put on maintenance:
vdsm-xmlrpc-4.17.28-0.el7.centos.noarch
vdsm-4.17.28-0.el7.centos.noarch
vdsm-infra-4.17.28-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.28-0.el7.centos.noarch
vdsm-python-4.17.28-0.el7.centos.noarch
vdsm-jsonrpc-4.17.28-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.28-0.el7.centos.noarch
vdsm-cli-4.17.28-0.el7.centos.noarch

And it was trying to send to an host with:
vdsm-yajsonrpc-4.17.28-1.el7.noarch
vdsm-cli-4.17.28-1.el7.noarch
vdsm-python-4.17.28-1.el7.noarch
vdsm-hook-vmfex-dev-4.17.28-1.el7.noarch
vdsm-xmlrpc-4.17.28-1.el7.noarch
vdsm-4.17.28-1.el7.noarch
vdsm-infra-4.17.28-1.el7.noarch
vdsm-jsonrpc-4.17.28-1.el7.noarch

And in the log about that:
jsonrpc.Executor/1::DEBUG::2016-06-17 
11:39:57,233::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 
'VM.migrate' in bridge with {u'params': {u
'tunneled': u'false', u'dstqemu': u'XX.XX.XX.28', u'autoConverge': u'false', 
u'src': u'nb0101.XXX', u'dst': u'nb0105.XXX:54321', u
'vmId': u'b82209c9-42ff-457c-bb98-b6a2034833fc', u'abortOnError': u'true', 
u'compressed': u'false', u'method': u'online'}, u'vmID': 
u'b82209c9-42ff-457c-bb9
8-b6a2034833fc'}
jsonrpc.Executor/1::DEBUG::2016-06-17 11:39:57,234::API::547::vds::(migrate) 
{u'tunneled': u'false', u'dstqemu': u'XX.XX.XX.28', u'autoConverge': u'false', 
u'src': u'nb0101.XXX', u'dst': u'nb0105.XXX:54321', u'vmId': 
u'b82209c9-42ff-457c-bb98-b6a2034833fc', u'abortOnError': u'true', u'
compressed': u'false', u'method': u'online'}



Re: [ovirt-users] migration failed with permission denied

2016-06-17 Thread Vinzenz Feenstra
Hi Fabrice,

> On Jun 17, 2016, at 11:41 AM, Fabrice Bacchella wrote:
> 
> I'm running an up to date ovirt setup.
> 
> I tried to put an host in maintenance mode, with one VM running on it.
> 
> It failed with this message in vdsm.log:
> 
> Thread-351083::ERROR::2016-06-17 
> 11:30:04,732::migration::209::virt.vm::(_recover) 
> vmId=`b82209c9-42ff-457c-bb98-b6a2034833fc`::internal error: process exited 
> while connecting to monitor: 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>  Failed to bind socket to 
> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>  Permission denied
> ...
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/migration.py", line 298, in run
> self._startUnderlyingMigration(time.time())
>   File "/usr/share/vdsm/virt/migration.py", line 364, in 
> _startUnderlyingMigration
> self._perform_migration(duri, muri)
>   File "/usr/share/vdsm/virt/migration.py", line 403, in _perform_migration
> self._vm._dom.migrateToURI3(duri, params, flags)
>   File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
> 124, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in 
> migrateToURI3
> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
> dom=self)
> libvirtError: internal error: process exited while connecting to monitor: 
> 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
>  Failed to bind socket to 
> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
>  Permission denied

This is pretty odd, could you please send me the out put of this:

# rpm -qa | grep vdsm

From the target and destination hosts. Thanks.

> 
> If i check the file, I see :
> 
> srwxrwxr-x 1 qemu qemu 0 May 31 16:21 
> /var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm
> 
> And on all my hosts, the permissions are the same:
> srwxrwxr-x 1 qemu qemu /var/lib/libvirt/qemu/channels/*
> 
> And vdsm is running vdsm:
> 4 S vdsm  3816 1  0  60 -20 - 947345 poll_s May25 ?   02:21:58 
> /usr/bin/python /usr/share/vdsm/vdsm
> 
> If I check vdsm groups:
> ~# id vdsm
> uid=36(vdsm) gid=36(kvm) groups=36(kvm),179(sanlock),107(qemu)
> 
> 
> 
> 


[ovirt-users] migration failed with permission denied

2016-06-17 Thread Fabrice Bacchella
I'm running an up to date ovirt setup.

I tried to put an host in maintenance mode, with one VM running on it.

It failed with this message in vdsm.log:

Thread-351083::ERROR::2016-06-17 
11:30:04,732::migration::209::virt.vm::(_recover) 
vmId=`b82209c9-42ff-457c-bb98-b6a2034833fc`::internal error: process exited 
while connecting to monitor: 2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
 Failed to bind socket to 
/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
 Permission denied
...
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 298, in run
self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 364, in 
_startUnderlyingMigration
self._perform_migration(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 403, in _perform_migration
self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in 
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)
libvirtError: internal error: process exited while connecting to monitor: 
2016-06-17T09:30:04.429323Z qemu-kvm: -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm,server,nowait:
 Failed to bind socket to 
/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm:
 Permission denied

If I check the file, I see:

srwxrwxr-x 1 qemu qemu 0 May 31 16:21 
/var/lib/libvirt/qemu/channels/b82209c9-42ff-457c-bb98-b6a2034833fc.com.redhat.rhevm.vdsm

And on all my hosts, the permissions are the same:
srwxrwxr-x 1 qemu qemu /var/lib/libvirt/qemu/channels/*

And vdsm is running as user vdsm:
4 S vdsm  3816 1  0  60 -20 - 947345 poll_s May25 ?   02:21:58 
/usr/bin/python /usr/share/vdsm/vdsm

If I check vdsm groups:
~# id vdsm
uid=36(vdsm) gid=36(kvm) groups=36(kvm),179(sanlock),107(qemu)
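
Since the per-VM sockets themselves look correct here, it is also worth checking the parent directories on the destination host; an unexpected owner or mode one level up can block the bind even when the socket files look fine. A minimal sketch:

  # On the destination host:
  ls -ld /var/lib/libvirt /var/lib/libvirt/qemu /var/lib/libvirt/qemu/channels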






Re: [ovirt-users] migration failed no available host found ....

2016-02-15 Thread Michal Skrivanek

> On 15 Feb 2016, at 12:00, Jean-Pierre Ribeauville <jpribeauvi...@axway.com> 
> wrote:
> 
> Hi,
>  
> You hit the target !!!
>  
> I enable overcommitting on the destination , then I’m able to migrate towards 
> it.
>  
> Now I’ve to clarify my  Guests memory requirements.
>  
> Thx for your help.
>  
> Regards,
>  
> J.P.
>  
> _
> From: Jean-Pierre Ribeauville 
> Sent: Monday, 15 February 2016 11:03
> To: 'ILanit Stein'
> Cc: users@ovirt.org
> Subject: RE: [ovirt-users] migration failed no available host found 
>  
>  
> Hi,
>  
> Within the ovirt GUI , I got this :
>  
> Max free Memory for scheduling new VMs : 0 Mb
>  
> It seems to be the root cause of my issue .
>  
> Vmstat -s ran on the destination host returns :
>  
> [root@ldc01omv01 vdsm]# vmstat -s
>  49182684 K total memory
>   4921536 K used memory
>   5999188 K active memory
>   1131436 K inactive memory
>  39891992 K free memory
>  2344 K buffer memory
>   4366812 K swap cache
>  24707068 K total swap
> 0 K used swap
>  24707068 K free swap
>   3090822 non-nice user cpu ticks
>  8068 nice user cpu ticks
>   2637035 system cpu ticks
> 804915819 idle cpu ticks
>298074 IO-wait cpu ticks
> 6 IRQ cpu ticks
>  5229 softirq cpu ticks
> 0 stolen cpu ticks
>  58678411 pages paged in
>  78586581 pages paged out
> 0 pages swapped in
> 0 pages swapped out
> 541412845 interrupts
>1224374736 CPU context switches
>1455276687 boot time
>476762 forks
> [root@ldc01omv01 vdsm]#
>  
>  
> Is it vdsm that returns this info to ovirt ?
>  
> I tried a migration this morning at 10/04.
>  
> I attached  vdsm destination log .
>  
> << File: vdsm.log >> 
>  
> Is it worth to increase of level of destination log ?
>  
>  
> Thx for help.
>  
> Regards,
>  
> J.P.
>  
> -----Original Message-----
> From: ILanit Stein [mailto:ist...@redhat.com] 
> Sent: Sunday, 14 February 2016 10:05
> To: Jean-Pierre Ribeauville
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] migration failed no available host found 
>  
> Hi Jean-Pierre,
>  
> It seems from the log you've sent that the destination host, ldc01omv01, is 
> filtered out because of a lack of memory.
> Is there enough memory on the destination, to run this VM?
>  
> Would you please send the source/destination hosts /var/log/vdsm/vdsm.log, 
> /var/log/libvirt/qemu/VM_RHEL7-2, and /var/log/vdsm/libvirt.log, to provide 
> more details.
>  
> Thanks,
> Ilanit.
>  
> - Original Message -
> From: "Jean-Pierre Ribeauville" <jpribeauvi...@axway.com 
> <mailto:jpribeauvi...@axway.com>>
> To: users@ovirt.org <mailto:users@ovirt.org>
> Sent: Friday, February 12, 2016 4:59:20 PM
> Subject: [ovirt-users] migration failed no available host found 
>  
>  
>  
> Hi, 
>  
>  
>  
> When trying to migrate a Guest between two nodes of a cluster (from node1 to 
> ldc01omv01) , I got this error ( in ovirt/engine.log file) : 
>  
>  
>  
> 2016-02-12 15:05:31,485 INFO 
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
> (ajp-/127.0.0.1:8702-4) [ff31b86] Candidate host ldc01omv01 
> (09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
> VAR__FILTERTYPE__INTERNAL filter Memory 
>  
> 2016-02-12 15:05:31,495 DEBUG 
> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
> (org.ovirt.thread.pool-7-thread-34) About to run task 
> java.util.concurrent.FutureTask from : java.lang.Exception 
>  
> at 
> org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.beforeExecute(ThreadPoolUtil.java:52)
>  [utils.jar:] 
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [rt.jar:1.7.0_85] 
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [rt.jar:1.7.0_85] 
>  
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_85] 
>  
>  
>  
> 2016-02-12 15:05:31,502 INFO 
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
> (org.ovirt.thread.pool-7-thread-34) [ff31b86] Running command: 
> MigrateVmToServerCommand internal: false. Entities affected : ID: 
> b77e6171-cbdf-44cb-b851-1b776a3fb616 Type: VMAction group MIGRATE_VM with 
> role type USER 
>  
> 2016-02-12 15:05:31,505 INFO 
> [org.ovirt.engine.cor

Re: [ovirt-users] migration failed no available host found ....

2016-02-15 Thread Jean-Pierre Ribeauville
Hi,

You hit the target !!!

I enabled overcommitting on the destination, and then I'm able to migrate 
towards it.

Now I have to clarify my guests' memory requirements.
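
For reference, a rough way to see what the scheduler's Memory filter is working from — a sketch only; the vdsm stats field names below are from the vdsClient CLI of that era and may differ between versions:

  # On the destination host:
  free -m                                            # raw memory picture
  vdsClient -s 0 getVdsStats | grep -iE "memAvailable|memCommitted|memFree"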

Thx for your help.

Regards,

J.P.

_
From: Jean-Pierre Ribeauville
Sent: Monday, 15 February 2016 11:03
To: 'ILanit Stein'
Cc: users@ovirt.org
Subject: RE: [ovirt-users] migration failed no available host found 


Hi,

Within the ovirt GUI , I got this :

Max free Memory for scheduling new VMs : 0 Mb

It seems to be the root cause of my issue.

Vmstat -s ran on the destination host returns :

[root@ldc01omv01 vdsm]# vmstat -s
 49182684 K total memory
  4921536 K used memory
  5999188 K active memory
  1131436 K inactive memory
 39891992 K free memory
 2344 K buffer memory
  4366812 K swap cache
 24707068 K total swap
0 K used swap
 24707068 K free swap
  3090822 non-nice user cpu ticks
 8068 nice user cpu ticks
  2637035 system cpu ticks
804915819 idle cpu ticks
   298074 IO-wait cpu ticks
6 IRQ cpu ticks
 5229 softirq cpu ticks
0 stolen cpu ticks
 58678411 pages paged in
 78586581 pages paged out
0 pages swapped in
0 pages swapped out
541412845 interrupts
   1224374736 CPU context switches
   1455276687 boot time
   476762 forks
[root@ldc01omv01 vdsm]#


Is it vdsm that returns this info to oVirt?

I tried a migration this morning at 10/04.

I attached the vdsm destination log.

 << File: vdsm.log >>

Is it worth increasing the log level on the destination?


Thx for help.

Regards,

J.P.

-----Original Message-----
From: ILanit Stein [mailto:ist...@redhat.com]
Sent: Sunday, 14 February 2016 10:05
To: Jean-Pierre Ribeauville
Cc: users@ovirt.org
Subject: Re: [ovirt-users] migration failed no available host found 

Hi Jean-Pierre,

It seems from the log you've sent that the destination host, ldc01omv01, is 
filtered out because of a lack of memory.
Is there enough memory on the destination, to run this VM?

Would you please send the source/destination hosts /var/log/vdsm/vdsm.log, 
/var/log/libvirt/qemu/VM_RHEL7-2, and /var/log/vdsm/libvirt.log, to provide 
more details.

Thanks,
Ilanit.


Re: [ovirt-users] migration failed no available host found ....

2016-02-14 Thread ILanit Stein
Hi Jean-Pierre,

It seems from the log you've sent that the destination host, ldc01omv01, is filtered 
out because of a lack of memory.
Is there enough memory on the destination, to run this VM?

Would you please send the source/destination hosts' /var/log/vdsm/vdsm.log, 
/var/log/libvirt/qemu/VM_RHEL7-2, and /var/log/vdsm/libvirt.log,
to provide more details.

Thanks,
Ilanit.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] migration failed no available host found ....

2016-02-12 Thread Jean-Pierre Ribeauville
Hi,

When trying to migrate a Guest between two nodes of a cluster (from node1 to 
ldc01omv01), I got this error (in ovirt/engine.log):

2016-02-12 15:05:31,485 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(ajp-/127.0.0.1:8702-4) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory
2016-02-12 15:05:31,495 DEBUG 
[org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
(org.ovirt.thread.pool-7-thread-34) About to run task 
java.util.concurrent.FutureTask from : java.lang.Exception
at 
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.beforeExecute(ThreadPoolUtil.java:52)
 [utils.jar:]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[rt.jar:1.7.0_85]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[rt.jar:1.7.0_85]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_85]

2016-02-12 15:05:31,502 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Running command: 
MigrateVmToServerCommand internal: false. Entities affected :  ID: 
b77e6171-cbdf-44cb-b851-1b776a3fb616 Type: VMAction group MIGRATE_VM with role 
type USER
2016-02-12 15:05:31,505 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory (correlation id: ff31b86)
2016-02-12 15:05:31,509 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Correlation ID: ff31b86, Job ID: 
7b917604-f487-43a3-9cd2-4e7f95e545cc, Call Stack: null, Custom Event ID: -1, 
Message: Migration failed, No available host found (VM: VM_RHEL7-2, Source: 
node1).


In the oVirt GUI, nothing strange.

How can I investigate this issue further?
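As a starting point, I guess I could pull together everything the engine logged
for this migration attempt, using the correlation id ff31b86 from the lines above:

    grep 'ff31b86' /var/log/ovirt-engine/engine.log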


Thx for help.

Regards,


J.P. Ribeauville


P: +33.(0).1.47.17.20.49
.
Puteaux 3 Etage 5  Bureau 4

jpribeauvi...@axway.com
http://www.axway.com



Please consider the environment before printing.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed

2015-12-10 Thread Yaniv Dary
We need logs to help, please attach them.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Wed, Dec 9, 2015 at 2:07 PM, Massimo Mad  wrote:

> Hi Michal,
> This is my configuration and the error:
>
> 1 start to migrate the vm from the cluster on centos 6.x to the cluster on centos
> bare-metal 7.x
> Migration started (VM: Spacewalk, Source: ovirtxx3, Destination: ovirtxx5,
> User: admin@internal).
> 2 first error:Migration failed due to Error: Fatal error during
> migration. Trying to migrate to another Host (VM: Spacewalkp, Source:
> ovirtxx03, Destination: ovirtxx05).
> 3 Second error: Migration failed, No available host found (VM: Spacewalk,
> Source: ovirtxx3).
>
>
> Regards
> Massimo
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migration failed

2015-12-09 Thread Massimo Mad
Hi Michal,
This is my configuration and the error:

1 start to migrate the vm from the cluster on centos 6.x to the cluster on centos
bare-metal 7.x
Migration started (VM: Spacewalk, Source: ovirtxx3, Destination: ovirtxx5,
User: admin@internal).
2 first error:Migration failed due to Error: Fatal error during
migration. Trying to migrate to another Host (VM: Spacewalkp, Source:
ovirtxx03, Destination: ovirtxx05).
3 Second error: Migration failed, No available host found (VM: Spacewalk,
Source: ovirtxx3).


Regards
Massimo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed

2015-12-06 Thread Michal Skrivanek


> On 04 Dec 2015, at 18:56, Massimo Mad  wrote:
> 
> Hi,
> I want to upgrade my oVirt infrastructure hosts from centos 6.x to centos 7.x 
> on bare metal.
> I created a new cluster with inside the new host, and when I try to migrate 
> the vm from one cluster to another I have the following messages:

Cross cluster migration is for el6 to el7 upgrade only, one way. 

> Migration failed, No available hosts found
> Migration failed due to Error: Fatal Error during migration. Trying to 
> migrate to another Host

Please describe your steps, setup, and errors in more detail

Thanks,
michal

> I checked the host file and the certificates and everything is fine
> Regards
> Massimo
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migration failed

2015-12-04 Thread Massimo Mad
Hi,
I want to upgrade my oVirt infrastructure hosts from centos 6.x to centos 7.x
on bare metal.
I created a new cluster with inside the new host, and when I try to migrate
the vm from one cluster to another I have the following messages:
Migration failed, No available hosts found
Migration failed due to Error: Fatal Error during migration. Trying to
migrate to another Host
I checked the host file and the certificates and everything is fine
Regards
Massimo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed, No available host found

2015-04-06 Thread Artyom Lukianov
The engine tried to migrate the vm to an available host, but the migration failed, so 
the engine tried another host. For some reason the migration failed on all hosts:
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
MigrateStatusVDS, error = Fatal error during migration, code = 12

For further investigation we need the vdsm logs (/var/log/vdsm/vdsm.log) from the source 
and also from the destination hosts.
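A sketch of how to grab them (hostnames taken from this thread; adjust paths if
your logs have already rotated):

    # run from your workstation or the engine machine
    scp root@virt2:/var/log/vdsm/vdsm.log vdsm-virt2.log   # source host
    scp root@virt3:/var/log/vdsm/vdsm.log vdsm-virt3.log   # destination host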
Thanks



[ovirt-users] Migration failed, No available host found

2015-04-06 Thread Jason Keltz

Hi.

I have 3 nodes in one cluster and 1 VM running on node2.  I'm trying to 
move the VM to node 1 or node 3, and it fails with the error: Migration 
failed, No available host found


I'm unable to decipher engine.log to determine the cause of the 
problem. Below are what seem to be the relevant lines from the log. 
Any help would be appreciated.


Thank you!

Jason.

---

2015-04-06 08:31:56,554 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] (ajp--127.0.0.1-8702-5) 
[3b191496] Lock Acquired to object EngineLock [exclusiveLocks= key: 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af value: VM

, sharedLocks= ]
2015-04-06 08:31:56,686 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Running command: 
MigrateVmCommand internal: false. Entities affected :  ID: 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: VMAction group MIGRATE_VM 
with role type USER,  ID: 9de649ca-c9a9-4ba7-bb2c-61c44e2819af Type: 
VMAction group EDIT_VM_PROPERTIES with role type USER,  ID: 
8d432949-e03c-4950-a91a-160727f7bdf2 Type: VdsGroupsAction group 
CREATE_VM with role type USER
2015-04-06 08:31:56,703 INFO 
[org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Started HA reservation 
scoring method
2015-04-06 08:31:56,727 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] START, 
MigrateVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, 
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, 
dstHost=192.168.0.36:54321, migrationMethod=ONLINE, 
tunnelMigration=false, migrationDowntime=0), log id: 7555acbd
2015-04-06 08:31:56,728 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] START, 
MigrateBrokerVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af, srcHost=192.168.0.35, 
dstVdsId=3429b1fc-36d5-4078-831c-a5b4370a8bfc, 
dstHost=192.168.0.36:54321, migrationMethod=ONLINE, 
tunnelMigration=false, migrationDowntime=0), log id: 6d98fb94
2015-04-06 08:31:56,734 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, 
MigrateBrokerVDSCommand, log id: 6d98fb94
2015-04-06 08:31:56,769 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] FINISH, 
MigrateVDSCommand, return: MigratingFrom, log id: 7555acbd
2015-04-06 08:31:56,778 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-20) [3b191496] Correlation ID: 3b191496, 
Job ID: 0f8c2d21-201e-454f-9876-dce9a1ca56fd, Call Stack: null, Custom 
Event ID: -1, Message: Migration started (VM: nindigo, Source: virt2, 
Destination: virt3, User: admin@internal).
2015-04-06 08:33:17,633 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] VM nindigo 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af moved from MigratingFrom -- Up
2015-04-06 08:33:17,633 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] Adding VM 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af to re-run list
2015-04-06 08:33:17,661 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-35) [71f97a52] Rerun vm 
9de649ca-c9a9-4ba7-bb2c-61c44e2819af. Called from vds virt2
2015-04-06 08:33:17,666 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] START, 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af), log id: 6c3c9923
2015-04-06 08:33:17,669 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Failed in 
MigrateStatusVDS method
2015-04-06 08:33:17,670 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return 
value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12, 
mMessage=Fatal error during migration]]
2015-04-06 08:33:17,670 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] HostName = virt2
2015-04-06 08:33:17,670 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command 
MigrateStatusVDSCommand(HostName = virt2, HostId = 
1d1d1fbb-3067-4703-8b51-e0a231d344e6, 
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 

Re: [ovirt-users] Migration failed, No available host found

2015-04-06 Thread Jason Keltz

Hi Artyom,

Here are the vdsm logs from  virt1, virt2 (where the node is running), 
and virt3.

The logs from virt2 look suspicious, but I'm still not sure of the problem.

http://goo.gl/GjbWUP

Jason.

On 04/06/2015 09:42 AM, Artyom Lukianov wrote:

Engine try to migrate vm on some available host, but migration failed, so 
engine try another host. From some reason migration failed on all hosts:
(org.ovirt.thread.pool-8-thread-38) [71f97a52] Command
MigrateStatusVDSCommand(HostName = virt2, HostId =
1d1d1fbb-3067-4703-8b51-e0a231d344e6,
vmId=9de649ca-c9a9-4ba7-bb2c-61c44e2819af) execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
MigrateStatusVDS, error = Fatal error during migration, code = 12

For future investigation we need vdsm logs(/var/log/vdsm/vdsm.log) from source 
and also from destination hosts.
Thanks



Re: [ovirt-users] Migration failed, No available host found

2015-04-06 Thread Jason Keltz

Hi Artyom,

The problems were caused by an issue with MTU on the hosts.  I have 
rectified the issue and can now migrate VMs between the hosts.
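For anyone hitting the same thing, a quick way to spot an MTU mismatch is to
compare the interface MTUs on each host and to send a large ping that is not
allowed to fragment (the 8972 byte payload below assumes jumbo frames with MTU
9000; adjust for your network):

    # on each host: list interfaces and their MTU
    ip link show | grep mtu

    # from the source host: large, non-fragmentable ping to the destination
    ping -M do -s 8972 -c 3 <destination-host>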


Jason.

On 04/06/2015 10:57 AM, Jason Keltz wrote:

Hi Artyom,

Here are the vdsm logs from  virt1, virt2 (where the node is running), 
and virt3.

The logs from virt2 look suspicious, but still not sure the problem.

http://goo.gl/GjbWUP

Jason.


Re: [ovirt-users] Migration failed due to Error: Fatal error during migration

2014-08-31 Thread Omer Frenkel


- Original Message -
 From: Punit Dambiwal hypu...@gmail.com
 To: Omer Frenkel ofren...@redhat.com
 Cc: Manfred Landauer manfred.landa...@fabasoft.com, users@ovirt.org
 Sent: Friday, August 29, 2014 4:30:38 AM
 Subject: Re: [ovirt-users] Migration failed due to Error: Fatal error during 
 migration
 
 Hi ,
 
 I am also facing the same issue...
 
 here is the engine logs :-
 
 2014-08-29 09:27:45,432 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) FINISH, MigrateStatusVDSCommand, log
 id: 1f3e4161
 2014-08-29 09:27:45,439 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (org.ovirt.thread.pool-6-thread-24) Correlation ID: 84ff2f4, Job ID:
 8bc0b78a-c600-4f8d-98f8-66a46c66abe0, Call Stack: null, Custom Event ID:
 -1, Message: Migration failed due to Error: Fatal error during migration.

Please send the vdsm and libvirt logs from the source host for this VM.

 Trying to migrate to another Host (VM: bc16391ac105b7e68cccb47803906d0b,
 Source: compute4, Destination: compute3).
 2014-08-29 09:27:45,536 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand]
 (org.ovirt.thread.pool-6-thread-24) Running command: MigrateVmCommand
 internal: false. Entities affected :  ID:
 6134b272-cd7f-43c1-a1b1-eaefe69c6b75 Type: VM

I see this VM did migrate successfully, right?

 2014-08-29 09:27:45,610 INFO
  
 [org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit]
 (org.ovirt.thread.pool-6-thread-24) Started HA reservation scoring method
 2014-08-29 09:27:45,666 INFO
  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) START, MigrateVDSCommand(HostName =
 compute4, HostId = 3a7a4504-1434-4fd2-ac00-e8d12c043b37,
 vmId=6134b272-cd7f-43c1-a1b1-eaefe69c6b75, srcHost=compute4.3linux.com,
 dstVdsId=bcd2bd85-c501-4be4-9730-a8662462cab5, dstHost=
 compute2.3linux.com:54321, migrationMethod=ONLINE, tunnelMigration=false,
 migrationDowntime=0), log id: 640b0ccd
 2014-08-29 09:27:45,667 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) VdsBroker::migrate::Entered
 (vm_guid=6134b272-cd7f-43c1-a1b1-eaefe69c6b75, srcHost=compute4.3linux.com,
 dstHost=compute2.3linux.com:54321,  method=online
 2014-08-29 09:27:45,684 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) START, MigrateBrokerVDSCommand(HostName
 = compute4, HostId = 3a7a4504-1434-4fd2-ac00-e8d12c043b37,
 vmId=6134b272-cd7f-43c1-a1b1-eaefe69c6b75, srcHost=compute4.3linux.com,
 dstVdsId=bcd2bd85-c501-4be4-9730-a8662462cab5, dstHost=
 compute2.3linux.com:54321, migrationMethod=ONLINE, tunnelMigration=false,
 migrationDowntime=0), log id: 1b3d3891
 2014-08-29 09:27:45,702 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) FINISH, MigrateBrokerVDSCommand, log
 id: 1b3d3891
 2014-08-29 09:27:45,707 INFO
  [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
 (org.ovirt.thread.pool-6-thread-24) FINISH, MigrateVDSCommand, return:
 MigratingFrom, log id: 640b0ccd
 2014-08-29 09:27:45,711 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (org.ovirt.thread.pool-6-thread-24) Correlation ID: 84ff2f4, Job ID:
 8bc0b78a-c600-4f8d-98f8-66a46c66abe0, Call Stack: null, Custom Event ID:
 -1, Message: Migration started (VM: bc16391ac105b7e68cccb47803906d0b,
 Source: compute4, Destination: compute2, User: admin).
 2014-08-29 09:27:47,143 INFO
  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-28) START,
 GlusterVolumesListVDSCommand(HostName = compute4, HostId =
 3a7a4504-1434-4fd2-ac00-e8d12c043b37), log id: 9e34612
 2014-08-29 09:27:47,300 INFO
  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-28) FINISH, GlusterVolumesListVDSCommand,
 return:
 {e6117925-79b1-417b-9d07-cfc31f68bc51=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@b42f1b41},
 log id: 9e34612
 2014-08-29 09:27:48,306 INFO
  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-51) RefreshVmList vm id
 6134b272-cd7f-43c1-a1b1-eaefe69c6b75 is migrating to vds compute2 ignoring
 it in the refresh until migration is done
 2014-08-29 09:27:51,349 INFO
  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-52) RefreshVmList vm id
 6134b272-cd7f-43c1-a1b1-eaefe69c6b75 is migrating to vds compute2 ignoring
 it in the refresh until migration is done
 2014-08-29 09:27:52,470 INFO
  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (DefaultQuartzScheduler_Worker-49) START,
 GlusterVolumesListVDSCommand(HostName = compute4, HostId =
 3a7a4504-1434-4fd2-ac00-e8d12c043b37), log id: 7cbb4cd
 2014-08-29 09:27:52,624 INFO

Re: [ovirt-users] Migration failed due to Error: Fatal error during migration

2014-08-28 Thread Omer Frenkel


- Original Message -
 From: Manfred Landauer manfred.landa...@fabasoft.com
 To: users@ovirt.org
 Sent: Thursday, August 14, 2014 6:29:21 PM
 Subject: [ovirt-users] Migration failed due to Error: Fatal error during  
 migration
 
 
 
 Hi all
 
 
 
 When we try to migrate a VM on oVirt “Engine Version: 3.4.3-1.el6 ” form host
 A to host B we’ll get this Errormessage: “Migration failed due to Error:
 Fatal error during migration”.
 
 
 
 It looks like, this occurs only when thin provisioned HDD’s attached to the
 VM. VM’s with preallocated HDD’s attached, migrate without a problem.
 
 
 
 Hope someone can help us to solve this issue.
 
 

it looks more like a network error:
Thread-3810747::ERROR::2014-08-14 16:48:45,471::vm::337::vm.Vm::(run) 
vmId=`494f5edc-7edd-4300-a675-f0a8883265e4`::Failed to migrate
Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 323, in run
self._startUnderlyingMigration()
  File /usr/share/vdsm/vm.py, line 400, in _startUnderlyingMigration
None, maxBandwidth)
  File /usr/share/vdsm/vm.py, line 838, in f
ret = attr(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 76, 
in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 1178, in 
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', 
dom=self)
libvirtError: Unable to read from monitor: Connection reset by peer

could you attach the libvirt log?
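For reference, the files I mean would typically be (paths assume a default
vdsm/libvirt setup; the per-VM log is named after the VM):

    /var/log/libvirt/qemu/<VM name>.log    # on the source host
    /var/log/vdsm/libvirt.log              # libvirtd log on a vdsm host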

 
 Best regards
 
 Manfred
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed due to Error: Fatal error during migration

2014-08-28 Thread Punit Dambiwal
=true, vmId=6134b272-cd7f-43c1-a1b1-eaefe69c6b75,
acpiEnable=true, cpuShares=2, custom={}, spiceSslCipherSuite=DEFAULT,
memSize=4096, smp=1, displayPort=5900, emulatedMachine=rhel6.5.0,
vmType=kvm, status=Up, memGuaranteedSize=512, display=vnc, pid=5718,
smartcardEnable=false, tabletEnable=true, smpCoresPerSocket=1,
spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,
maxVCpus=160, clientIp=, devices=[Ljava.lang.Object;@53ee8569,
vmName=bc16391ac105b7e68cccb47803906d0b, cpuType=Westmere}], log id:
7bd8658c
2014-08-29 09:28:03,548 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-55) Correlation ID: 84ff2f4, Job ID:
8bc0b78a-c600-4f8d-98f8-66a46c66abe0, Call Stack: null, Custom Event ID:
-1, Message: Migration completed (VM: bc16391ac105b7e68cccb47803906d0b,
Source: compute4, Destination: compute2, Duration: 17 sec).
2014-08-29 09:28:03,551 INFO  [org.ovirt.engine.core.bll.LoginUserCommand]
(ajp--127.0.0.1-8702-1) Running command: LoginUserCommand internal: false.
2014-08-29 09:28:03,553 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand]
(DefaultQuartzScheduler_Worker-55) Lock freed to object EngineLock
[exclusiveLocks= key: 6134b272-cd7f-43c1-a1b1-eaefe69c6b75 value: VM
, sharedLocks= ]
2014-08-29 09:28:03,571 INFO
 [org.ovirt.engine.core.vdsbroker.FailedToRunVmVDSCommand]
(org.ovirt.thread.pool-6-thread-10) START, FailedToRunVmVDSCommand(HostName
= compute3, HostId = fb492af0-3489-4c15-bb9d-6e3829cb536c), log id: 52897093
2014-08-29 09:28:03,572 INFO
 [org.ovirt.engine.core.vdsbroker.FailedToRunVmVDSCommand]
(org.ovirt.thread.pool-6-thread-10) FINISH, FailedToRunVmVDSCommand, log
id: 52897093
2014-08-29 09:28:03,651 INFO  [org.ovirt.engine.core.bll.LogoutUserCommand]
(ajp--127.0.0.1-8702-1) [5c34822e] Running command: LogoutUserCommand
internal: false.
2014-08-29 09:28:03,656 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-1) [5c34822e] Correlation ID: 5c34822e, Call Stack:
null, Custom Event ID: -1, Message: User admin logged out.
2014-08-29 09:28:03,716 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-22) START, DestroyVDSCommand(HostName =
compute4, HostId = 3a7a4504-1434-4fd2-ac00-e8d12c043b37,
vmId=6134b272-cd7f-43c1-a1b1-eaefe69c6b75, force=false, secondsToWait=0,
gracefully=false), log id: 7484552f
2014-08-29 09:28:03,769 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-22) FINISH, DestroyVDSCommand, log id:
7484552f
2014-08-29 09:28:03,770 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-22) RefreshVmList vm id
6134b272-cd7f-43c1-a1b1-eaefe69c6b75 status = Down on vds compute4 ignoring
it in the refresh until migration is done
2014-08-29 09:28:08,470 INFO
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-71) START,
GlusterVolumesListVDSCommand(HostName = compute4, HostId =
3a7a4504-1434-4fd2-ac00-e8d12c043b37), log id: 6293b194
2014-08-29 09:28:08,564 INFO  [org.ovirt.engine.core.bll.LoginUserCommand]
(ajp--127.0.0.1-8702-6) Running command: LoginUserCommand internal: false.
2014-08-29 09:28:08,571 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-6) Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: User admin logged in.



On Thu, Aug 28, 2014 at 7:48 PM, Omer Frenkel ofren...@redhat.com wrote:



 - Original Message -
  From: Manfred Landauer manfred.landa...@fabasoft.com
  To: users@ovirt.org
  Sent: Thursday, August 14, 2014 6:29:21 PM
  Subject: [ovirt-users] Migration failed due to Error: Fatal error
 during  migration
 
 
 
  Hi all
 
 
 
  When we try to migrate a VM on oVirt “Engine Version: 3.4.3-1.el6 ” form
 host
  A to host B we’ll get this Errormessage: “Migration failed due to Error:
  Fatal error during migration”.
 
 
 
  It looks like, this occurs only when thin provisioned HDD’s attached to
 the
  VM. VM’s with preallocated HDD’s attached, migrate without a problem.
 
 
 
  Hope someone can help us to solve this issue.
 
 

 it looks more like a network error:
 Thread-3810747::ERROR::2014-08-14 16:48:45,471::vm::337::vm.Vm::(run)
 vmId=`494f5edc-7edd-4300-a675-f0a8883265e4`::Failed to migrate
 Traceback (most recent call last):
   File /usr/share/vdsm/vm.py, line 323, in run
 self._startUnderlyingMigration()
   File /usr/share/vdsm/vm.py, line 400, in _startUnderlyingMigration
 None, maxBandwidth)
   File /usr/share/vdsm/vm.py, line 838, in f
 ret = attr(*args, **kwargs)
   File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py,
 line 76, in wrapper
 ret = f(*args, **kwargs)
   File /usr/lib64/python2.6/site-packages/libvirt.py, line 1178, in
 migrateToURI2
 if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed',
 dom=self

[ovirt-users] Migration failed, No available host found

2014-08-25 Thread PaulCheung
Dear ALL,
I have 3 servers: KVM01, KVM02, KVM03. I want to migrate some VMs to KVM02, 
but it shows this message:


 Migration failed, No available host found (VM: AL1-Paul, Source: KVM03).




But I can migrate from kvm01 to kvm03, or from kvm03 to kvm01, but not to kvm02.




I checked the firewalls, they are all the same! Can somebody help me!




  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration failed, No available host found

2014-08-25 Thread Omer Frenkel


- Original Message -
 From: PaulCheung eq2...@msn.com
 To: users@ovirt.org
 Sent: Monday, August 25, 2014 2:03:22 PM
 Subject: [ovirt-users] Migration failed, No available host found
 
 Dear ALL,
 
 I have 3 servers, KVM01, KVM02, KVM03
 I want to migration some vms to KVM02 , there show a message:
 
 Migration failed, No available host found (VM: AL1-Paul, Source: KVM03).
 
 
 
 
 But I can migration from kvm01 to kvm03, or kvm03 to kvm01, but not kvm02.
 
 
 
 
 I check the firewall, they are all the same! Can somebody help me!
 
 

Are you sure kvm02 has enough resources (cpu/mem) to host the new VM?
Can you please attach engine.log?
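A sketch of how to see why the engine skipped kvm02 (the exact wording of the
scheduler message varies between engine versions):

    grep -i 'kvm02' /var/log/ovirt-engine/engine.log | grep -iE 'filter|No available host'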


 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migration failed due to Error: Fatal error during migration

2014-08-14 Thread Landauer, Manfred
Hi all

When we try to migrate a VM on oVirt Engine Version 3.4.3-1.el6 from host A 
to host B we get this error message: Migration failed due to Error: Fatal 
error during migration.

It looks like this occurs only when thin-provisioned HDDs are attached to the 
VM. VMs with preallocated HDDs attached migrate without a problem.

Hope someone can help us to solve this issue.

Best regards
Manfred



vdsm.log
Description: vdsm.log
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-02-27 Thread Meital Bourvine
Hi Koen, 

Can you please attach the relevant vdsm logs? 

- Original Message -

 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: users@ovirt.org
 Sent: Thursday, February 27, 2014 9:38:46 AM
 Subject: [Users] Migration Failed

 Dear all,

 I added a new host to our ovirt. Everything went well, except in the beginning
 there was a problem with the firmware of the Fibre Channel card, but that is solved
 (maybe relevant to the issue coming up ;-) ); the host is green and up now. But
 when I tried to migrate a machine for testing purposes to see if everything
 was ok, I get the following error in the engine.log and the migration fails:

 2014-02-27 08:33:08,082 INFO [org.ovirt.engine.core.bll.MigrateVmCommand]
 (pool-6-thread-49) [f1a68d8] Running command: MigrateVmCommand internal:
 false. Entities affected : ID: 3444fc9d-0395-4cbb-9a11-28a42802560c Type: VM
 2014-02-27 08:33:08,362 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49)
 [f1a68d8] START, MigrateVDSCommand(HostName = soyuz, HostId =
 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
 soyuz.brusselsairport.aero , dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d,
 dstHost= buran.brusselsairport.aero:54321 , migrationMethod=ONLINE,
 tunnelMigration=false), log id: 50cd7284
 2014-02-27 08:33:08,371 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-6-thread-49) [f1a68d8] VdsBroker::migrate::Entered
 (vm_guid=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
 soyuz.brusselsairport.aero , dstHost= buran.brusselsairport.aero:54321 ,
 method=online
 2014-02-27 08:33:08,405 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-6-thread-49) [f1a68d8] START, MigrateBrokerVDSCommand(HostName =
 soyuz, HostId = 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
 soyuz.brusselsairport.aero , dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d,
 dstHost= buran.brusselsairport.aero:54321 , migrationMethod=ONLINE,
 tunnelMigration=false), log id: 20806b79
 2014-02-27 08:33:08,441 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-6-thread-49) [f1a68d8] FINISH, MigrateBrokerVDSCommand, log id:
 20806b79
 2014-02-27 08:33:08,451 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49)
 [f1a68d8] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 50cd7284
 2014-02-27 08:33:08,491 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (pool-6-thread-49) [f1a68d8] Correlation ID: f1a68d8, Job ID:
 c3642418-3f05-41eb-8b1d-07fe04867742, Call Stack: null, Custom Event ID: -1,
 Message: Migration started (VM: ADW-DevSplunk, Source: soyuz, Destination:
 buran, User: admin@internal).
 2014-02-27 08:33:20,036 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-82) VM ADW-DevSplunk
 3444fc9d-0395-4cbb-9a11-28a42802560c moved from MigratingFrom -- Up
 2014-02-27 08:33:20,042 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-82) Adding VM
 3444fc9d-0395-4cbb-9a11-28a42802560c to re-run list
 2014-02-27 08:33:20,051 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-82) Rerun vm
 3444fc9d-0395-4cbb-9a11-28a42802560c. Called from vds soyuz
 2014-02-27 08:33:20,107 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) START, MigrateStatusVDSCommand(HostName = soyuz, HostId =
 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c), log id: 75ac0a46
 2014-02-27 08:33:20,124 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) Failed in MigrateStatusVDS method
 2014-02-27 08:33:20,130 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) Error code noConPeer and error message
 VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
 Could not connect to peer VDS
 2014-02-27 08:33:20,136 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) Command
 org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
 value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=10, mMessage=Could
 not connect to peer VDS]]
 2014-02-27 08:33:20,139 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) HostName = soyuz
 2014-02-27 08:33:20,145 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) Command MigrateStatusVDS execution failed. Exception:
 VDSErrorException: VDSGenericException: VDSErrorException: Failed to
 MigrateStatusVDS, error = Could not connect to peer VDS
 2014-02-27 08:33:20,148 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) FINISH, MigrateStatusVDSCommand, log id: 75ac0a46
 2014-02

Re: [Users] Migration Failed

2014-02-27 Thread Gadi Ickowicz
Hi,

Unfortunately it seems the vdsm logs cycled - these vdsm logs do not match the 
times for the engine log snippet you pasted - they start at around 10:00 AM and 
the engine points to 8:33...

Gadi Ickowicz

- Original Message -
From: Koen Vanoppen vanoppen.k...@gmail.com
To: Meital Bourvine mbour...@redhat.com, users@ovirt.org
Sent: Thursday, February 27, 2014 11:04:11 AM
Subject: Re: [Users] Migration Failed

In attachment... 
Thanx! 


2014-02-27 9:21 GMT+01:00 Meital Bourvine  mbour...@redhat.com  : 



Hi Koen, 

Can you please attach the relevant vdsm logs? 





From: Koen Vanoppen  vanoppen.k...@gmail.com  
To: users@ovirt.org 
Sent: Thursday, February 27, 2014 9:38:46 AM 
Subject: [Users] Migration Failed 


Dear all, 

I added a new host to our ovirt. Everything went good, exept in the beginnen 
there was a problem with the firmware of the FibreCard but that is solved 
(maybe relevant to the issue coming up ;-) ), host is green en up now. But when 
I tried to migrate a machine for testing purpose to see if everythin was ok, I 
get the following error in the engine.log and the migration fails: 

2014-02-27 08:33:08,082 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] 
(pool-6-thread-49) [f1a68d8] Running command: MigrateVmCommand internal: false. 
Entities affected : ID: 3444fc9d-0395-4cbb-9a11-28a42802560c Type: VM 
2014-02-27 08:33:08,362 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49) 
[f1a68d8] START, MigrateVDSCommand(HostName = soyuz, HostId = 
6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= soyuz.brusselsairport.aero 
, dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d, dstHost= 
buran.brusselsairport.aero:54321 , migrationMethod=ONLINE, 
tunnelMigration=false), log id: 50cd7284 
2014-02-27 08:33:08,371 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-6-thread-49) [f1a68d8] VdsBroker::migrate::Entered 
(vm_guid=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= 
soyuz.brusselsairport.aero , dstHost= buran.brusselsairport.aero:54321 , 
method=online 
2014-02-27 08:33:08,405 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-6-thread-49) [f1a68d8] START, MigrateBrokerVDSCommand(HostName = soyuz, 
HostId = 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= soyuz.brusselsairport.aero 
, dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d, dstHost= 
buran.brusselsairport.aero:54321 , migrationMethod=ONLINE, 
tunnelMigration=false), log id: 20806b79 
2014-02-27 08:33:08,441 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-6-thread-49) [f1a68d8] FINISH, MigrateBrokerVDSCommand, log id: 20806b79 
2014-02-27 08:33:08,451 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49) 
[f1a68d8] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 50cd7284 
2014-02-27 08:33:08,491 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(pool-6-thread-49) [f1a68d8] Correlation ID: f1a68d8, Job ID: 
c3642418-3f05-41eb-8b1d-07fe04867742, Call Stack: null, Custom Event ID: -1, 
Message: Migration started (VM: ADW-DevSplunk, Source: soyuz, Destination: 
buran, User: admin@internal). 
2014-02-27 08:33:20,036 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-82) VM ADW-DevSplunk 
3444fc9d-0395-4cbb-9a11-28a42802560c moved from MigratingFrom -- Up 
2014-02-27 08:33:20,042 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-82) Adding VM 
3444fc9d-0395-4cbb-9a11-28a42802560c to re-run list 
2014-02-27 08:33:20,051 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-82) Rerun vm 
3444fc9d-0395-4cbb-9a11-28a42802560c. Called from vds soyuz 
2014-02-27 08:33:20,107 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50) START, MigrateStatusVDSCommand(HostName = soyuz, HostId = 
6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c), log id: 75ac0a46 
2014-02-27 08:33:20,124 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50) Failed in MigrateStatusVDS method 
2014-02-27 08:33:20,130 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50) Error code noConPeer and error message VDSGenericException: 
VDSErrorException: Failed to MigrateStatusVDS, error = Could not connect to 
peer VDS 
2014-02-27 08:33:20,136 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50) Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value 
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=10, mMessage=Could 
not connect to peer VDS]] 
2014-02-27 08:33:20,139 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50

Re: [Users] Migration Failed

2014-02-27 Thread Michal Skrivanek

On Feb 27, 2014, at 10:09 , Gadi Ickowicz gicko...@redhat.com wrote:

 Hi,
 
 Unfortunately it seems the vdsm logs cycled - these vdsm logs do not match 
 the times for the engine log snippet you pasted - they start at around 10:00 
 AM and the engine points to 8:33…

Seeing error = Could not connect to peer VDS points me to a possible direct 
network connectivity issue between those two hosts. The source vdsm needs to be 
able to talk to the destination vdsm.
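A direct check between the two (port 54321 is the vdsm port seen in the log
above; nc is assumed to be installed) could be:

    # from soyuz
    ping -c 3 buran.brusselsairport.aero
    nc -zv buran.brusselsairport.aero 54321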

Thanks,
michal

 
 Gadi Ickowicz
 
 - Original Message -
 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: Meital Bourvine mbour...@redhat.com, users@ovirt.org
 Sent: Thursday, February 27, 2014 11:04:11 AM
 Subject: Re: [Users] Migration Failed
 
 In attachment... 
 Thanx! 
 
 
 2014-02-27 9:21 GMT+01:00 Meital Bourvine  mbour...@redhat.com  : 
 
 
 
 Hi Koen, 
 
 Can you please attach the relevant vdsm logs? 
 
 
 
 
 
 From: Koen Vanoppen  vanoppen.k...@gmail.com  
 To: users@ovirt.org 
 Sent: Thursday, February 27, 2014 9:38:46 AM 
 Subject: [Users] Migration Failed 
 
 
 Dear all, 
 
 I added a new host to our ovirt. Everything went good, exept in the beginnen 
 there was a problem with the firmware of the FibreCard but that is solved 
 (maybe relevant to the issue coming up ;-) ), host is green en up now. But 
 when I tried to migrate a machine for testing purpose to see if everythin was 
 ok, I get the following error in the engine.log and the migration fails: 
 
 2014-02-27 08:33:08,082 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] 
 (pool-6-thread-49) [f1a68d8] Running command: MigrateVmCommand internal: 
 false. Entities affected : ID: 3444fc9d-0395-4cbb-9a11-28a42802560c Type: VM 
 2014-02-27 08:33:08,362 INFO 
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49) 
 [f1a68d8] START, MigrateVDSCommand(HostName = soyuz, HostId = 
 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= 
 soyuz.brusselsairport.aero , dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d, 
 dstHost= buran.brusselsairport.aero:54321 , migrationMethod=ONLINE, 
 tunnelMigration=false), log id: 50cd7284 
 2014-02-27 08:33:08,371 INFO 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
 (pool-6-thread-49) [f1a68d8] VdsBroker::migrate::Entered 
 (vm_guid=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= 
 soyuz.brusselsairport.aero , dstHost= buran.brusselsairport.aero:54321 , 
 method=online 
 2014-02-27 08:33:08,405 INFO 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
 (pool-6-thread-49) [f1a68d8] START, MigrateBrokerVDSCommand(HostName = soyuz, 
 HostId = 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= 
 soyuz.brusselsairport.aero , dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d, 
 dstHost= buran.brusselsairport.aero:54321 , migrationMethod=ONLINE, 
 tunnelMigration=false), log id: 20806b79 
 2014-02-27 08:33:08,441 INFO 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
 (pool-6-thread-49) [f1a68d8] FINISH, MigrateBrokerVDSCommand, log id: 
 20806b79 
 2014-02-27 08:33:08,451 INFO 
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49) 
 [f1a68d8] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 50cd7284 
 2014-02-27 08:33:08,491 INFO 
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
 (pool-6-thread-49) [f1a68d8] Correlation ID: f1a68d8, Job ID: 
 c3642418-3f05-41eb-8b1d-07fe04867742, Call Stack: null, Custom Event ID: -1, 
 Message: Migration started (VM: ADW-DevSplunk, Source: soyuz, Destination: 
 buran, User: admin@internal). 
 2014-02-27 08:33:20,036 INFO 
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
 (DefaultQuartzScheduler_Worker-82) VM ADW-DevSplunk 
 3444fc9d-0395-4cbb-9a11-28a42802560c moved from MigratingFrom -- Up 
 2014-02-27 08:33:20,042 INFO 
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
 (DefaultQuartzScheduler_Worker-82) Adding VM 
 3444fc9d-0395-4cbb-9a11-28a42802560c to re-run list 
 2014-02-27 08:33:20,051 ERROR 
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
 (DefaultQuartzScheduler_Worker-82) Rerun vm 
 3444fc9d-0395-4cbb-9a11-28a42802560c. Called from vds soyuz 
 2014-02-27 08:33:20,107 INFO 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
 (pool-6-thread-50) START, MigrateStatusVDSCommand(HostName = soyuz, HostId = 
 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c), log id: 75ac0a46 
 2014-02-27 08:33:20,124 ERROR 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
 (pool-6-thread-50) Failed in MigrateStatusVDS method 
 2014-02-27 08:33:20,130 ERROR 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
 (pool-6-thread-50) Error code noConPeer and error message 
 VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = 
 Could not connect to peer VDS 
 2014-02-27 08:33:20,136 INFO

Re: [Users] Migration Failed

2014-02-27 Thread Dan Kenigsberg
On Thu, Feb 27, 2014 at 10:42:54AM +0100, Koen Vanoppen wrote:
 Sorry...
 I added the correct one now

Still, I fail to find the relevant ::ERROR:: line about migration.
But as Michal mentioned, Could not connect to peer VDS means that
source vdsm failed to contact the destination one.

This can stem from physical or logical network problem.
Can you ping from source to dest?
What happens when you log into the source host and run

  vdsClient -s fqdn-of-destination-host list

? Do you get any response? What happens if you disable your firewall?
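
If it is easier, both checks can be run in one go from the source host; this is 
just a small wrapper around the exact commands above (ping and vdsClient), nothing 
vdsm-specific:

  # run the two checks above against the destination host (sketch)
  import subprocess
  import sys

  dst = sys.argv[1]  # fqdn of the destination host

  for cmd in (["ping", "-c", "3", dst],
              ["vdsClient", "-s", dst, "list"]):
      print("$ " + " ".join(cmd))
      rc = subprocess.call(cmd)
      print("-> exit status %d" % rc)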

Regards,
Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Migration Failed

2014-02-26 Thread Koen Vanoppen
Dear all,

I added a new host to our oVirt. Everything went well, except that in the
beginning there was a problem with the firmware of the Fibre Channel card, but
that is solved (maybe relevant to the issue coming up ;-) ); the host is green
and up now. But when I tried to migrate a machine for testing purposes to see
if everything was OK, I get the following error in the engine.log and the
migration fails:

2014-02-27 08:33:08,082 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand]
(pool-6-thread-49) [f1a68d8] Running command: MigrateVmCommand internal:
false. Entities affected :  ID: 3444fc9d-0395-4cbb-9a11-28a42802560c Type:
VM
2014-02-27 08:33:08,362 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49)
[f1a68d8] START, MigrateVDSCommand(HostName = soyuz, HostId =
6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
soyuz.brusselsairport.aero, dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d,
dstHost=buran.brusselsairport.aero:54321, migrationMethod=ONLINE,
tunnelMigration=false), log id: 50cd7284
2014-02-27 08:33:08,371 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-6-thread-49) [f1a68d8] VdsBroker::migrate::Entered
(vm_guid=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
soyuz.brusselsairport.aero, dstHost=buran.brusselsairport.aero:54321,
method=online
2014-02-27 08:33:08,405 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-6-thread-49) [f1a68d8] START, MigrateBrokerVDSCommand(HostName =
soyuz, HostId = 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
soyuz.brusselsairport.aero, dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d,
dstHost=buran.brusselsairport.aero:54321, migrationMethod=ONLINE,
tunnelMigration=false), log id: 20806b79
2014-02-27 08:33:08,441 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(pool-6-thread-49) [f1a68d8] FINISH, MigrateBrokerVDSCommand, log id:
20806b79
2014-02-27 08:33:08,451 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49)
[f1a68d8] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 50cd7284
2014-02-27 08:33:08,491 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-49) [f1a68d8] Correlation ID: f1a68d8, Job ID:
c3642418-3f05-41eb-8b1d-07fe04867742, Call Stack: null, Custom Event ID:
-1, Message: Migration started (VM: ADW-DevSplunk, Source: soyuz,
Destination: buran, User: admin@internal).
2014-02-27 08:33:20,036 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-82) VM ADW-DevSplunk
3444fc9d-0395-4cbb-9a11-28a42802560c moved from MigratingFrom -- Up
2014-02-27 08:33:20,042 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-82) Adding VM
3444fc9d-0395-4cbb-9a11-28a42802560c to re-run list
2014-02-27 08:33:20,051 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-82) Rerun vm
3444fc9d-0395-4cbb-9a11-28a42802560c. Called from vds soyuz
2014-02-27 08:33:20,107 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-6-thread-50) START, MigrateStatusVDSCommand(HostName = soyuz, HostId
= 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c), log id: 75ac0a46
2014-02-27 08:33:20,124 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-6-thread-50) Failed in MigrateStatusVDS method
2014-02-27 08:33:20,130 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-6-thread-50) Error code noConPeer and error message
VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
Could not connect to peer VDS
2014-02-27 08:33:20,136 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-6-thread-50) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=10,
mMessage=Could not connect to peer VDS]]
2014-02-27 08:33:20,139 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-6-thread-50) HostName = soyuz
2014-02-27 08:33:20,145 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-6-thread-50) Command MigrateStatusVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
MigrateStatusVDS, error = Could not connect to peer VDS
2014-02-27 08:33:20,148 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-6-thread-50) FINISH, MigrateStatusVDSCommand, log id: 75ac0a46
2014-02-27 08:33:20,154 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-50) Correlation ID: f1a68d8, Job ID:
c3642418-3f05-41eb-8b1d-07fe04867742, Call Stack: null, Custom Event ID:
-1, Message: Migration failed due to Error: Could not connect to peer host.
Trying to migrate to another Host (VM: 

Re: [Users] Migration failed (previous migrations succeeded)

2014-02-01 Thread Itamar Heim

On 01/31/2014 09:30 AM, Sven Kieske wrote:

Hi,

is there any documentation regarding all
allowed settings in the vdsm.conf?

I didn't find anything related in the rhev docs


that's a question for vdsm mailing list - cc-ing...



On 30.01.2014 21:43, Itamar Heim wrote:

On 01/30/2014 10:37 PM, Markus Stockhausen wrote:

From: Itamar Heim [ih...@redhat.com]
Sent: Thursday, 30 January 2014 21:25
To: Markus Stockhausen; ovirt-users
Subject: Re: [Users] Migration failed (previous migrations succeeded)


Now I'm getting serious problems. During the migration the VM was
doing a rather slow download at 1.5 MB/s, so the memory changed
by 15 MB per 10 seconds. No wonder that a check every 10 seconds
was not able to see any progress. I'm scared of what will happen if I
want to migrate a moderately loaded system running a database.

Any tips for parametrization?

Markus



what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
migrate on 1Gb without congesting it.
you could raise that if you have 10GB, or raise the bandwidth cap and
reduce max number of concurrent VMs, etc.


My migration network is IPoIB 10GBit. During our tests only one VM
was migrated.  Bandwidth cap or number of concurrent VMs has not
been changed after default install.

Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?


probably


And what settings do you suggest?


well, to begin with, 300MB/sec on 10GE (still allowing concurrent
migrations)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeeded)

2014-01-31 Thread Sven Kieske
Hi,

is there any documentation regarding all
allowed settings in the vdsm.conf?

I didn't find anything related in the rhev docs

On 30.01.2014 21:43, Itamar Heim wrote:
 On 01/30/2014 10:37 PM, Markus Stockhausen wrote:
 From: Itamar Heim [ih...@redhat.com]
 Sent: Thursday, 30 January 2014 21:25
 To: Markus Stockhausen; ovirt-users
 Subject: Re: [Users] Migration failed (previous migrations succeeded)


 Now I'm getting serious problems. During the migration the VM was
 doing a rather slow download at 1.5 MB/s, so the memory changed
 by 15 MB per 10 seconds. No wonder that a check every 10 seconds
 was not able to see any progress. I'm scared of what will happen if I
 want to migrate a moderately loaded system running a database.

 Any tips for parametrization?

 Markus


 what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
 migrate on 1Gb without congesting it.
 you could raise that if you have 10GB, or raise the bandwidth cap and
 reduce max number of concurrent VMs, etc.

 My migration network is IPoIB 10GBit. During our tests only one VM
 was migrated.  Bandwidth cap or number of concurrent VMs has not
 been changed after default install.

 Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?
 
 probably
 
 And what settings do you suggest?
 
 well, to begin with, 300MB/sec on 10GE (still allowing concurrent
 migrations)
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeeded)

2014-01-30 Thread Markus Stockhausen
 From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
 Markus Stockhausen [stockhau...@collogia.de]
 Sent: Thursday, 30 January 2014 18:05
 To: ovirt-users
 Subject: [Users] Migration failed (previous migrations succeeded)
 
 Hello,
 
 We did some migration tests today and all of a sudden the migration
 failed. That particular VM was moved around several times that day without
 any problems. During the migration the VM was running a download.

Found the reason. The memory was changing faster than the copy process
worked. The logs show:

Thread-289929::WARNING::2014-01-30 16:14:45,559::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(19MiB)  smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...
Thread-289929::WARNING::2014-01-30 16:14:55,561::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(24MiB)  smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...
Thread-289929::WARNING::2014-01-30 16:15:05,563::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(20MiB)  smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...

Now I'm getting serious problems. During the migration the VM was
doing a rather slow download at 1.5 MB/s, so the memory changed
by 15 MB per 10 seconds. No wonder that a check every 10 seconds
was not able to see any progress. I'm scared of what will happen if I
want to migrate a moderately loaded system running a database.

Any tips for parametrization?

Markus
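
For what it's worth, the numbers above already explain the abort: the progress 
monitor samples the remaining data every 10 seconds, and when it never drops below 
the smallest value seen so far it treats the migration as stalling (that is what 
the RHBZ#919201 warnings mean). A rough back-of-the-envelope illustration of that, 
my own sketch rather than vdsm code:

  # rough illustration of the stalling check above (not vdsm code)
  dirtied_per_window = 1.5 * 10   # MB dirtied per 10 s window (1.5 MB/s download)
  copied_per_window = 1.5 * 10    # MB copied per window over the hot pages (assumed)

  remaining = 20.0                # MiB left, roughly what the log shows (19-24 MiB)
  smallest = 9.0                  # smallest_dataRemaining from the log

  for window in range(1, 4):
      remaining = remaining - copied_per_window + dirtied_per_window
      print("window %d: dataRemaining ~%.0f MiB, smallest so far %.0f MiB"
            % (window, remaining, smallest))
      if remaining > smallest:
          print("  -> still above the best value seen, so it counts as stalling")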



This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeeded)

2014-01-30 Thread Itamar Heim

On 01/30/2014 09:22 PM, Markus Stockhausen wrote:

From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
Markus Stockhausen [stockhau...@collogia.de]
Sent: Thursday, 30 January 2014 18:05
To: ovirt-users
Subject: [Users] Migration failed (previous migrations succeeded)

Hello,

We did some migration tests today and all of a sudden the migration
failed. That particular VM was moved around several times that day without
any problems. During the migration the VM was running a download.


Found the reason. The memory was changing faster than the copy process
worked. The logs show:

Thread-289929::WARNING::2014-01-30 16:14:45,559::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(19MiB)  smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...
Thread-289929::WARNING::2014-01-30 16:14:55,561::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(24MiB)  smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...
Thread-289929::WARNING::2014-01-30 16:15:05,563::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(20MiB)  smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...

Now I'm getting serious problems. During the migration the VM was
doing a rather slow download at 1.5 MB/s, so the memory changed
by 15 MB per 10 seconds. No wonder that a check every 10 seconds
was not able to see any progress. I'm scared of what will happen if I
want to migrate a moderately loaded system running a database.

Any tips for parametrization?

Markus



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to 
migrate on 1Gb without congesting it.
you could raise that if you have 10GB, or raise the bandwidth cap and 
reduce max number of concurrent VMs, etc.
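
To make the arithmetic behind that default concrete (rough numbers, ignoring 
protocol overhead):

  # why ~30 MB/s per migration fits on a 1 Gbit link (rough numbers)
  per_vm_mb_s = 30                 # default bandwidth cap per migration
  concurrent_vms = 3
  link_mb_s = 1000 / 8.0           # 1 Gbit/s is ~125 MB/s before overhead

  used_mb_s = per_vm_mb_s * concurrent_vms
  print("3 concurrent migrations: %d MB/s (~%.0f%% of a 1 Gbit link)"
        % (used_mb_s, 100.0 * used_mb_s / link_mb_s))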

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeeded)

2014-01-30 Thread Markus Stockhausen
  From: Itamar Heim [ih...@redhat.com]
  Sent: Thursday, 30 January 2014 21:25
  To: Markus Stockhausen; ovirt-users
  Subject: Re: [Users] Migration failed (previous migrations succeeded)
  
 
  Now I'm getting serious problems. During the migration the VM was
  doing a rather slow download at 1.5 MB/s, so the memory changed
  by 15 MB per 10 seconds. No wonder that a check every 10 seconds
  was not able to see any progress. I'm scared of what will happen if I
  want to migrate a moderately loaded system running a database.

  Any tips for parametrization?
 
  Markus
 
 
 what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
 migrate on 1Gb without congesting it.
 you could raise that if you have 10GB, or raise the bandwidth cap and
 reduce max number of concurrent VMs, etc.

My migration network is IPoIB 10GBit. During our tests only one VM 
was migrated.  Bandwidth cap or number of concurrent VMs has not
been changed after default install. 

Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?
And what settings do you suggest?

Markus


This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeeded)

2014-01-30 Thread Itamar Heim

On 01/30/2014 10:37 PM, Markus Stockhausen wrote:

From: Itamar Heim [ih...@redhat.com]
Sent: Thursday, 30 January 2014 21:25
To: Markus Stockhausen; ovirt-users
Subject: Re: [Users] Migration failed (previous migrations succeeded)


Now I'm getting serious problems. During the migration the VM was
doing a rather slow download at 1.5 MB/s, so the memory changed
by 15 MB per 10 seconds. No wonder that a check every 10 seconds
was not able to see any progress. I'm scared of what will happen if I
want to migrate a moderately loaded system running a database.

Any tips for parametrization?

Markus



what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
migrate on 1Gb without congesting it.
you could raise that if you have 10GB, or raise the bandwidth cap and
reduce max number of concurrent VMs, etc.


My migration network is IPoIB 10GBit. During our tests only one VM
was migrated.  Bandwidth cap or number of concurrent VMs has not
been changed after default install.

Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?


probably


And what settings do you suggest?


well, to begin with, 300MB/sec on 10GE (still allowing concurrent 
migrations)
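
So on a 10GbE migration network something along these lines in /etc/vdsm/vdsm.conf 
should do it (a sketch: I am assuming the option still sits in the [vars] section, 
and I believe vdsmd has to be restarted for it to take effect):

  [vars]
  migration_max_bandwidth = 300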

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-14 Thread Michal Skrivanek
 /usr/share/vdsm/sampling.py, line 226, in __call__
 retValue = self._function(*args, **kwargs)
   File /usr/share/vdsm/vm.py, line 509, in _highWrite
 if not vmDrive.blockDev or vmDrive.format != 'cow':
 AttributeError: 'Drive' object has no attribute 'format'
 
 How did you create this vm? was it from the UI? was it from a
 script?
 what
 are the parameters you used?
 
 Thanks,
 
 Dafna
 
 
 
 On 01/07/2014 04:34 PM, Neil wrote:
 
 Hi Elad,
 
 Thanks for assisting me, yes the same condition exists, if I try
 to
 migrate Tux it says The VM Tux is being migrated.
 
 
 Below are the details requested.
 
 
 [root@node01 ~]# virsh -r list
   IdName   State
 
   1 adam   running
 
 [root@node01 ~]# pgrep qemu
 11232
 [root@node01 ~]# vdsClient -s 0 list table
 63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam
 Up
 
 
 [root@node03 ~]# virsh -r list
   IdName   State
 
   7 tuxrunning
 
 [root@node03 ~]# pgrep qemu
 32333
 [root@node03 ~]# vdsClient -s 0 list table
 2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux
 Up
 
 Thanks.
 
 Regards.
 
 Neil Wilson.
 
 
 On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon
 ebena...@redhat.com
 wrote:
 
 Is it still in the same condition?
 If yes, please add the outputs from both hosts for:
 
 #virsh -r list
 #pgrep qemu
 #vdsClient -s 0 list table (or 'vdsClient 0 list table' if
 you
 are
 working in insecure mode)
 
 
  Thanks,
 
 Elad Ben Aharon
 RHEV-QE storage team
 
 
 
 
 - Original Message -
 From: Neil nwilson...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, January 7, 2014 4:21:43 PM
 Subject: [Users] Migration Failed
 
 Hi guys,
 
 I've tried to migrate a VM from one host(node03) to
 another(node01),
 and it failed to migrate, and the VM(tux) remained on the
 original
 host. I've now tried to migrate the same VM again, and it picks
 up
 that the previous migration is still in progress and refuses to
 migrate.
 
 I've checked for the KVM process on each of the hosts and the VM
 is
 definitely still running on node03 so there doesn't appear to be
 any
 chance of the VM trying to run on both hosts (which I've had
 before
 which is very scary).
 
 These are my versions... and attached are my engine.log and my
 vdsm.log
 
 Centos 6.5
 ovirt-iso-uploader-3.3.1-1.el6.noarch
 ovirt-host-deploy-1.1.2-1.el6.noarch
 ovirt-release-el6-9-1.noarch
 ovirt-engine-setup-3.3.1-2.el6.noarch
 ovirt-engine-3.3.1-2.el6.noarch
 ovirt-host-deploy-java-1.1.2-1.el6.noarch
 ovirt-image-uploader-3.3.1-1.el6.noarch
 ovirt-engine-dbscripts-3.3.1-2.el6.noarch
 ovirt-engine-cli-3.3.0.6-1.el6.noarch
 ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
 ovirt-engine-userportal-3.3.1-2.el6.noarch
 ovirt-log-collector-3.3.1-1.el6.noarch
 ovirt-engine-tools-3.3.1-2.el6.noarch
 ovirt-engine-lib-3.3.1-2.el6.noarch
 ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
 ovirt-engine-backend-3.3.1-2.el6.noarch
 ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
 ovirt-engine-restapi-3.3.1-2.el6.noarch
 
 
 vdsm-python-4.13.0-11.el6.x86_64
 vdsm-cli-4.13.0-11.el6.noarch
 vdsm-xmlrpc-4.13.0-11.el6.noarch
 vdsm-4.13.0-11.el6.x86_64
 vdsm-python-cpopen-4.13.0-11.el6.x86_64
 
 I've had a few issues with this particular installation in the
 past,
 as it's from a very old pre release of ovirt, then upgrading to
 the
 dreyou repo, then finally moving to the official Centos ovirt
 repo.
 
 Thanks, any help is greatly appreciated.
 
 Regards.
 
 Neil Wilson.
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 
 --
 Dafna Ron
 
 
 
 --
 Dafna Ron
 
 
 
 --
 Dafna Ron
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-13 Thread Neil
 call last):
 File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
   dom = findMethod(sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 171, in
 _findUnfetchedDomain
   raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
 Thread-19::ERROR::2014-01-07


 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain)
 Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289
 monitoring information
 Traceback (most recent call last):
 File /usr/share/vdsm/storage/domainMonitor.py, line 190, in
 _monitorDomain
   self.domain = sdCache.produce(self.sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 98, in produce
   domain.getRealDomain()
 File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
   return self._cache._realProduce(self._sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
   domain = self._findDomain(sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
   dom = findMethod(sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 171, in
 _findUnfetchedDomain
   raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
 Dummy-29013::DEBUG::2014-01-07

 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail)
 'dd


 if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox
 iflag=direct,fullblock count=1 bs=1024000' (cwd N
 one)

 3. The migration fails with libvirt error but we need the trace from
 the
 second log:

 Thread-1165153::DEBUG::2014-01-07
 13:39:42,451::sampling::292::vm.Vm::(stop)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
 Thread-1163583::DEBUG::2014-01-07
 13:39:42,452::sampling::323::vm.Vm::(run)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
 Thread-1165153::DEBUG::2014-01-07
 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper)
 Unknown
 libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
 domain with matching uuid '63da7faa-f92a-4652-90f2-b6660
 a4fb7b3'


 4. But I am worried about this and would like more info about this vm...

 Thread-247::ERROR::2014-01-07
 15:35:14,868::sampling::355::vm.Vm::(collect)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed:
 AdvancedStatsFunction _highWrite at 0x2ce0998
 Traceback (most recent call last):
 File /usr/share/vdsm/sampling.py, line 351, in collect
   statsFunction()
 File /usr/share/vdsm/sampling.py, line 226, in __call__
   retValue = self._function(*args, **kwargs)
 File /usr/share/vdsm/vm.py, line 509, in _highWrite
   if not vmDrive.blockDev or vmDrive.format != 'cow':
 AttributeError: 'Drive' object has no attribute 'format'

 How did you create this vm? was it from the UI? was it from a script?
 what
 are the parameters you used?

 Thanks,

 Dafna



 On 01/07/2014 04:34 PM, Neil wrote:

 Hi Elad,

 Thanks for assisting me, yes the same condition exists, if I try to
 migrate Tux it says The VM Tux is being migrated.


 Below are the details requested.


 [root@node01 ~]# virsh -r list
 IdName   State
 
 1 adam   running

 [root@node01 ~]# pgrep qemu
 11232
 [root@node01 ~]# vdsClient -s 0 list table
 63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam Up


 [root@node03 ~]# virsh -r list
 IdName   State
 
 7 tuxrunning

 [root@node03 ~]# pgrep qemu
 32333
 [root@node03 ~]# vdsClient -s 0 list table
 2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up

 Thanks.

 Regards.

 Neil Wilson.


 On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon ebena...@redhat.com
 wrote:

 Is it still in the same condition?
 If yes, please add the outputs from both hosts for:

 #virsh -r list
 #pgrep qemu
 #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you
 are
 working in insecure mode)


 Thanks,

 Elad Ben Aharon
 RHEV-QE storage team




 - Original Message -
 From: Neil nwilson...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, January 7, 2014 4:21:43 PM
 Subject: [Users] Migration Failed

 Hi guys,

 I've tried to migrate a VM from one host(node03) to another(node01),
 and it failed to migrate, and the VM(tux) remained on the original
 host. I've now tried to migrate the same VM again, and it picks up
 that the previous migration is still in progress and refuses to
 migrate.

 I've checked for the KVM process on each of the hosts and the VM is
 definitely still running on node03 so there doesn't appear to be any
 chance of the VM trying to run on both hosts (which I've had before
 which is very

Re: [Users] Migration Failed

2014-01-13 Thread Michal Skrivanek
 libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
 domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
 
 hread-19::ERROR::2014-01-07
 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain)
 domain
 e9ab725d-69c1-4a59-b225-b995d095c289 not found
 Traceback (most recent call last):
File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
  dom = findMethod(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 171, in
 _findUnfetchedDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
 Thread-19::ERROR::2014-01-07
 
 
 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain)
 Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289
 monitoring information
 Traceback (most recent call last):
File /usr/share/vdsm/storage/domainMonitor.py, line 190, in
 _monitorDomain
  self.domain = sdCache.produce(self.sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 98, in produce
  domain.getRealDomain()
File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
  return self._cache._realProduce(self._sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
  domain = self._findDomain(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
  dom = findMethod(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 171, in
 _findUnfetchedDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
 Dummy-29013::DEBUG::2014-01-07
 
 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail)
 'dd
 
 
 if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox
 iflag=direct,fullblock count=1 bs=1024000' (cwd N
 one)
 
 3. The migration fails with libvirt error but we need the trace from
 the
 second log:
 
 Thread-1165153::DEBUG::2014-01-07
 13:39:42,451::sampling::292::vm.Vm::(stop)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
 Thread-1163583::DEBUG::2014-01-07
 13:39:42,452::sampling::323::vm.Vm::(run)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
 Thread-1165153::DEBUG::2014-01-07
 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper)
 Unknown
 libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
 domain with matching uuid '63da7faa-f92a-4652-90f2-b6660
 a4fb7b3'
 
 
 4. But I am worried about this and would like more info about this vm...
 
 Thread-247::ERROR::2014-01-07
 15:35:14,868::sampling::355::vm.Vm::(collect)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed:
 AdvancedStatsFunction _highWrite at 0x2ce0998
 Traceback (most recent call last):
File /usr/share/vdsm/sampling.py, line 351, in collect
  statsFunction()
File /usr/share/vdsm/sampling.py, line 226, in __call__
  retValue = self._function(*args, **kwargs)
File /usr/share/vdsm/vm.py, line 509, in _highWrite
  if not vmDrive.blockDev or vmDrive.format != 'cow':
 AttributeError: 'Drive' object has no attribute 'format'
 
 How did you create this vm? was it from the UI? was it from a script?
 what
 are the parameters you used?
 
 Thanks,
 
 Dafna
 
 
 
 On 01/07/2014 04:34 PM, Neil wrote:
 
 Hi Elad,
 
 Thanks for assisting me, yes the same condition exists, if I try to
 migrate Tux it says The VM Tux is being migrated.
 
 
 Below are the details requested.
 
 
 [root@node01 ~]# virsh -r list
IdName   State
 
1 adam   running
 
 [root@node01 ~]# pgrep qemu
 11232
 [root@node01 ~]# vdsClient -s 0 list table
 63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam Up
 
 
 [root@node03 ~]# virsh -r list
IdName   State
 
7 tuxrunning
 
 [root@node03 ~]# pgrep qemu
 32333
 [root@node03 ~]# vdsClient -s 0 list table
 2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up
 
 Thanks.
 
 Regards.
 
 Neil Wilson.
 
 
 On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon ebena...@redhat.com
 wrote:
 
 Is it still in the same condition?
 If yes, please add the outputs from both hosts for:
 
 #virsh -r list
 #pgrep qemu
 #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you
 are
 working in insecure mode)
 
 
 Thanks,
 
 Elad Ben Aharon
 RHEV-QE storage team
 
 
 
 
 - Original Message -
 From: Neil nwilson...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, January 7, 2014 4:21:43 PM
 Subject: [Users] Migration Failed
 
 Hi guys,
 
 I've tried to migrate a VM from one host(node03) to another(node01),
 and it failed to migrate, and the VM(tux) remained on the original
 host. I've now tried to migrate

Re: [Users] Migration Failed

2014-01-13 Thread Neil
)
  vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function
  failed:
  AdvancedStatsFunction _highWrite at 0x2ce0998
  Traceback (most recent call last):
 File /usr/share/vdsm/sampling.py, line 351, in collect
   statsFunction()
 File /usr/share/vdsm/sampling.py, line 226, in __call__
   retValue = self._function(*args, **kwargs)
 File /usr/share/vdsm/vm.py, line 509, in _highWrite
   if not vmDrive.blockDev or vmDrive.format != 'cow':
  AttributeError: 'Drive' object has no attribute 'format'
 
  How did you create this vm? was it from the UI? was it from a
  script?
  what
  are the parameters you used?
 
  Thanks,
 
  Dafna
 
 
 
  On 01/07/2014 04:34 PM, Neil wrote:
 
  Hi Elad,
 
  Thanks for assisting me, yes the same condition exists, if I try
  to
  migrate Tux it says The VM Tux is being migrated.
 
 
  Below are the details requested.
 
 
  [root@node01 ~]# virsh -r list
 IdName   State
  
 1 adam   running
 
  [root@node01 ~]# pgrep qemu
  11232
  [root@node01 ~]# vdsClient -s 0 list table
  63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam
  Up
 
 
  [root@node03 ~]# virsh -r list
 IdName   State
  
 7 tuxrunning
 
  [root@node03 ~]# pgrep qemu
  32333
  [root@node03 ~]# vdsClient -s 0 list table
  2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux
  Up
 
  Thanks.
 
  Regards.
 
  Neil Wilson.
 
 
  On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon
  ebena...@redhat.com
  wrote:
 
  Is it still in the same condition?
  If yes, please add the outputs from both hosts for:
 
  #virsh -r list
  #pgrep qemu
  #vdsClient -s 0 list table (or 'vdsClient 0 list table' if
  you
  are
  working in insecure mode)
 
 
  Thanks,
 
  Elad Ben Aharon
  RHEV-QE storage team
 
 
 
 
  - Original Message -
  From: Neil nwilson...@gmail.com
  To: users@ovirt.org
  Sent: Tuesday, January 7, 2014 4:21:43 PM
  Subject: [Users] Migration Failed
 
  Hi guys,
 
  I've tried to migrate a VM from one host(node03) to
  another(node01),
  and it failed to migrate, and the VM(tux) remained on the
  original
  host. I've now tried to migrate the same VM again, and it picks
  up
  that the previous migration is still in progress and refuses to
  migrate.
 
  I've checked for the KVM process on each of the hosts and the VM
  is
  definitely still running on node03 so there doesn't appear to be
  any
  chance of the VM trying to run on both hosts (which I've had
  before
  which is very scary).
 
  These are my versions... and attached are my engine.log and my
  vdsm.log
 
  Centos 6.5
  ovirt-iso-uploader-3.3.1-1.el6.noarch
  ovirt-host-deploy-1.1.2-1.el6.noarch
  ovirt-release-el6-9-1.noarch
  ovirt-engine-setup-3.3.1-2.el6.noarch
  ovirt-engine-3.3.1-2.el6.noarch
  ovirt-host-deploy-java-1.1.2-1.el6.noarch
  ovirt-image-uploader-3.3.1-1.el6.noarch
  ovirt-engine-dbscripts-3.3.1-2.el6.noarch
  ovirt-engine-cli-3.3.0.6-1.el6.noarch
  ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
  ovirt-engine-userportal-3.3.1-2.el6.noarch
  ovirt-log-collector-3.3.1-1.el6.noarch
  ovirt-engine-tools-3.3.1-2.el6.noarch
  ovirt-engine-lib-3.3.1-2.el6.noarch
  ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
  ovirt-engine-backend-3.3.1-2.el6.noarch
  ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
  ovirt-engine-restapi-3.3.1-2.el6.noarch
 
 
  vdsm-python-4.13.0-11.el6.x86_64
  vdsm-cli-4.13.0-11.el6.noarch
  vdsm-xmlrpc-4.13.0-11.el6.noarch
  vdsm-4.13.0-11.el6.x86_64
  vdsm-python-cpopen-4.13.0-11.el6.x86_64
 
  I've had a few issues with this particular installation in the
  past,
  as it's from a very old pre release of ovirt, then upgrading to
  the
  dreyou repo, then finally moving to the official Centos ovirt
  repo.
 
  Thanks, any help is greatly appreciated.
 
  Regards.
 
  Neil Wilson.
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 
  --
  Dafna Ron
 
 
 
  --
  Dafna Ron
 
 
 
  --
  Dafna Ron
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-13 Thread Sven Kieske
Hi,

you may want to consider enabling your firewall
for security reasons.

The ports which nfs uses are configured under:

/etc/sysconfig/nfs

for EL 6.
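
If you want to see at a glance which NFS-related ports are actually pinned there, 
a small sketch (the variable names are the ones EL6 ships commented out in that 
file; adjust if your copy differs):

  # list pinned NFS port variables in /etc/sysconfig/nfs (EL6 variable names assumed)
  wanted = ("MOUNTD_PORT", "STATD_PORT", "LOCKD_TCPPORT",
            "LOCKD_UDPPORT", "RQUOTAD_PORT")
  with open("/etc/sysconfig/nfs") as f:
      for line in f:
          line = line.strip()
          if line and not line.startswith("#") and line.split("=")[0] in wanted:
              print(line)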

There's no reason at all to run oVirt without
correct firewalling, not even in test environments
as you will want firewalling in production and
you have to test it anyway.

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-09 Thread Neil
-07


 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain)
 Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289
 monitoring information
 Traceback (most recent call last):
 File /usr/share/vdsm/storage/domainMonitor.py, line 190, in
 _monitorDomain
   self.domain = sdCache.produce(self.sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 98, in produce
   domain.getRealDomain()
 File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
   return self._cache._realProduce(self._sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
   domain = self._findDomain(sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
   dom = findMethod(sdUUID)
 File /usr/share/vdsm/storage/sdc.py, line 171, in
 _findUnfetchedDomain
   raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
 Dummy-29013::DEBUG::2014-01-07

 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail)
 'dd


 if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox
 iflag=direct,fullblock count=1 bs=1024000' (cwd N
 one)

 3. The migration fails with libvirt error but we need the trace from
 the
 second log:

 Thread-1165153::DEBUG::2014-01-07
 13:39:42,451::sampling::292::vm.Vm::(stop)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
 Thread-1163583::DEBUG::2014-01-07
 13:39:42,452::sampling::323::vm.Vm::(run)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
 Thread-1165153::DEBUG::2014-01-07
 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper)
 Unknown
 libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
 domain with matching uuid '63da7faa-f92a-4652-90f2-b6660
 a4fb7b3'


 4. But I am worried about this and would like more info about this vm...

 Thread-247::ERROR::2014-01-07
 15:35:14,868::sampling::355::vm.Vm::(collect)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed:
 AdvancedStatsFunction _highWrite at 0x2ce0998
 Traceback (most recent call last):
 File /usr/share/vdsm/sampling.py, line 351, in collect
   statsFunction()
 File /usr/share/vdsm/sampling.py, line 226, in __call__
   retValue = self._function(*args, **kwargs)
 File /usr/share/vdsm/vm.py, line 509, in _highWrite
   if not vmDrive.blockDev or vmDrive.format != 'cow':
 AttributeError: 'Drive' object has no attribute 'format'

 How did you create this vm? was it from the UI? was it from a script?
 what
 are the parameters you used?

 Thanks,

 Dafna



 On 01/07/2014 04:34 PM, Neil wrote:

 Hi Elad,

 Thanks for assisting me, yes the same condition exists, if I try to
 migrate Tux it says The VM Tux is being migrated.


 Below are the details requested.


 [root@node01 ~]# virsh -r list
 IdName   State
 
 1 adam   running

 [root@node01 ~]# pgrep qemu
 11232
 [root@node01 ~]# vdsClient -s 0 list table
 63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam Up


 [root@node03 ~]# virsh -r list
 IdName   State
 
 7 tuxrunning

 [root@node03 ~]# pgrep qemu
 32333
 [root@node03 ~]# vdsClient -s 0 list table
 2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up

 Thanks.

 Regards.

 Neil Wilson.


 On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon ebena...@redhat.com
 wrote:

 Is it still in the same condition?
 If yes, please add the outputs from both hosts for:

 #virsh -r list
 #pgrep qemu
 #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you
 are
 working in insecure mode)


 Thanks,

 Elad Ben Aharon
 RHEV-QE storage team




 - Original Message -
 From: Neil nwilson...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, January 7, 2014 4:21:43 PM
 Subject: [Users] Migration Failed

 Hi guys,

 I've tried to migrate a VM from one host(node03) to another(node01),
 and it failed to migrate, and the VM(tux) remained on the original
 host. I've now tried to migrate the same VM again, and it picks up
 that the previous migration is still in progress and refuses to
 migrate.

 I've checked for the KVM process on each of the hosts and the VM is
 definitely still running on node03 so there doesn't appear to be any
 chance of the VM trying to run on both hosts (which I've had before
 which is very scary).

 These are my versions... and attached are my engine.log and my
 vdsm.log

 Centos 6.5
 ovirt-iso-uploader-3.3.1-1.el6.noarch
 ovirt-host-deploy-1.1.2-1.el6.noarch
 ovirt-release-el6-9-1.noarch
 ovirt-engine-setup-3.3.1-2.el6.noarch
 ovirt-engine-3.3.1-2.el6.noarch
 ovirt-host-deploy-java-1.1.2-1.el6.noarch
 ovirt-image-uploader-3.3.1-1.el6.noarch

Re: [Users] Migration Failed

2014-01-08 Thread Dafna Ron
: no
domain with matching uuid '63da7faa-f92a-4652-90f2-b6660
a4fb7b3'


4. But I am worried about this and would like more info about this vm...

Thread-247::ERROR::2014-01-07 15:35:14,868::sampling::355::vm.Vm::(collect)
vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed:
AdvancedStatsFunction _highWrite at 0x2ce0998
Traceback (most recent call last):
   File /usr/share/vdsm/sampling.py, line 351, in collect
 statsFunction()
   File /usr/share/vdsm/sampling.py, line 226, in __call__
 retValue = self._function(*args, **kwargs)
   File /usr/share/vdsm/vm.py, line 509, in _highWrite
 if not vmDrive.blockDev or vmDrive.format != 'cow':
AttributeError: 'Drive' object has no attribute 'format'
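
Just to spell out what that traceback means: _highWrite reads vmDrive.format 
unconditionally, and this particular Drive object never got a 'format' attribute 
at all. A tiny illustration of the failure and of a defensive read (my own sketch, 
not the actual vm.py code or fix; the interesting question is still why the drive 
metadata is incomplete):

  # illustrates the AttributeError above (not vdsm code)
  class Drive(object):
      def __init__(self, blockDev):
          self.blockDev = blockDev  # note: no 'format' attribute, like the failing drive

  vmDrive = Drive(blockDev=True)
  try:
      if not vmDrive.blockDev or vmDrive.format != 'cow':
          print("watermark check skipped")
  except AttributeError as exc:
      print("same failure as in the log: %s" % exc)

  # a defensive read avoids the crash, though it does not explain the bad metadata
  print("format seen as: %r" % getattr(vmDrive, 'format', None))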

How did you create this vm? was it from the UI? was it from a script? what
are the parameters you used?

Thanks,

Dafna



On 01/07/2014 04:34 PM, Neil wrote:

Hi Elad,

Thanks for assisting me, yes the same condition exists, if I try to
migrate Tux it says The VM Tux is being migrated.


Below are the details requested.


[root@node01 ~]# virsh -r list
   IdName   State

   1 adam   running

[root@node01 ~]# pgrep qemu
11232
[root@node01 ~]# vdsClient -s 0 list table
63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam Up


[root@node03 ~]# virsh -r list
   IdName   State

   7 tuxrunning

[root@node03 ~]# pgrep qemu
32333
[root@node03 ~]# vdsClient -s 0 list table
2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up

Thanks.

Regards.

Neil Wilson.


On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon ebena...@redhat.com
wrote:

Is it still in the same condition?
If yes, please add the outputs from both hosts for:

#virsh -r list
#pgrep qemu
#vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are
working in insecure mode)


Thanks,

Elad Ben Aharon
RHEV-QE storage team




- Original Message -
From: Neil nwilson...@gmail.com
To: users@ovirt.org
Sent: Tuesday, January 7, 2014 4:21:43 PM
Subject: [Users] Migration Failed

Hi guys,

I've tried to migrate a VM from one host(node03) to another(node01),
and it failed to migrate, and the VM(tux) remained on the original
host. I've now tried to migrate the same VM again, and it picks up
that the previous migration is still in progress and refuses to
migrate.

I've checked for the KVM process on each of the hosts and the VM is
definitely still running on node03 so there doesn't appear to be any
chance of the VM trying to run on both hosts (which I've had before
which is very scary).

These are my versions... and attached are my engine.log and my vdsm.log

Centos 6.5
ovirt-iso-uploader-3.3.1-1.el6.noarch
ovirt-host-deploy-1.1.2-1.el6.noarch
ovirt-release-el6-9-1.noarch
ovirt-engine-setup-3.3.1-2.el6.noarch
ovirt-engine-3.3.1-2.el6.noarch
ovirt-host-deploy-java-1.1.2-1.el6.noarch
ovirt-image-uploader-3.3.1-1.el6.noarch
ovirt-engine-dbscripts-3.3.1-2.el6.noarch
ovirt-engine-cli-3.3.0.6-1.el6.noarch
ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
ovirt-engine-userportal-3.3.1-2.el6.noarch
ovirt-log-collector-3.3.1-1.el6.noarch
ovirt-engine-tools-3.3.1-2.el6.noarch
ovirt-engine-lib-3.3.1-2.el6.noarch
ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
ovirt-engine-backend-3.3.1-2.el6.noarch
ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
ovirt-engine-restapi-3.3.1-2.el6.noarch


vdsm-python-4.13.0-11.el6.x86_64
vdsm-cli-4.13.0-11.el6.noarch
vdsm-xmlrpc-4.13.0-11.el6.noarch
vdsm-4.13.0-11.el6.x86_64
vdsm-python-cpopen-4.13.0-11.el6.x86_64

I've had a few issues with this particular installation in the past,
as it's from a very old pre release of ovirt, then upgrading to the
dreyou repo, then finally moving to the official Centos ovirt repo.

Thanks, any help is greatly appreciated.

Regards.

Neil Wilson.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-08 Thread Neil
. The migration fails with libvirt error but we need the trace from the
 second log:

 Thread-1165153::DEBUG::2014-01-07
 13:39:42,451::sampling::292::vm.Vm::(stop)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
 Thread-1163583::DEBUG::2014-01-07
 13:39:42,452::sampling::323::vm.Vm::(run)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
 Thread-1165153::DEBUG::2014-01-07
 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper)
 Unknown
 libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
 domain with matching uuid '63da7faa-f92a-4652-90f2-b6660
 a4fb7b3'


 4. But I am worried about this and would like more info about this vm...

 Thread-247::ERROR::2014-01-07
 15:35:14,868::sampling::355::vm.Vm::(collect)
 vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed:
 AdvancedStatsFunction _highWrite at 0x2ce0998
 Traceback (most recent call last):
File /usr/share/vdsm/sampling.py, line 351, in collect
  statsFunction()
File /usr/share/vdsm/sampling.py, line 226, in __call__
  retValue = self._function(*args, **kwargs)
File /usr/share/vdsm/vm.py, line 509, in _highWrite
  if not vmDrive.blockDev or vmDrive.format != 'cow':
 AttributeError: 'Drive' object has no attribute 'format'

 How did you create this vm? was it from the UI? was it from a script?
 what
 are the parameters you used?

 Thanks,

 Dafna



 On 01/07/2014 04:34 PM, Neil wrote:

 Hi Elad,

 Thanks for assisting me, yes the same condition exists, if I try to
 migrate Tux it says The VM Tux is being migrated.


 Below are the details requested.


 [root@node01 ~]# virsh -r list
IdName   State
 
1 adam   running

 [root@node01 ~]# pgrep qemu
 11232
 [root@node01 ~]# vdsClient -s 0 list table
 63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam Up


 [root@node03 ~]# virsh -r list
IdName   State
 
7 tuxrunning

 [root@node03 ~]# pgrep qemu
 32333
 [root@node03 ~]# vdsClient -s 0 list table
 2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up

 Thanks.

 Regards.

 Neil Wilson.


 On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon ebena...@redhat.com
 wrote:

 Is it still in the same condition?
 If yes, please add the outputs from both hosts for:

 #virsh -r list
 #pgrep qemu
 #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are
 working in insecure mode)


 Thanks,

 Elad Ben Aharon
 RHEV-QE storage team




 - Original Message -
 From: Neil nwilson...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, January 7, 2014 4:21:43 PM
 Subject: [Users] Migration Failed

 Hi guys,

 I've tried to migrate a VM from one host(node03) to another(node01),
 and it failed to migrate, and the VM(tux) remained on the original
 host. I've now tried to migrate the same VM again, and it picks up
 that the previous migration is still in progress and refuses to
 migrate.

 I've checked for the KVM process on each of the hosts and the VM is
 definitely still running on node03 so there doesn't appear to be any
 chance of the VM trying to run on both hosts (which I've had before
 which is very scary).

 These are my versions... and attached are my engine.log and my vdsm.log

 Centos 6.5
 ovirt-iso-uploader-3.3.1-1.el6.noarch
 ovirt-host-deploy-1.1.2-1.el6.noarch
 ovirt-release-el6-9-1.noarch
 ovirt-engine-setup-3.3.1-2.el6.noarch
 ovirt-engine-3.3.1-2.el6.noarch
 ovirt-host-deploy-java-1.1.2-1.el6.noarch
 ovirt-image-uploader-3.3.1-1.el6.noarch
 ovirt-engine-dbscripts-3.3.1-2.el6.noarch
 ovirt-engine-cli-3.3.0.6-1.el6.noarch
 ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
 ovirt-engine-userportal-3.3.1-2.el6.noarch
 ovirt-log-collector-3.3.1-1.el6.noarch
 ovirt-engine-tools-3.3.1-2.el6.noarch
 ovirt-engine-lib-3.3.1-2.el6.noarch
 ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
 ovirt-engine-backend-3.3.1-2.el6.noarch
 ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
 ovirt-engine-restapi-3.3.1-2.el6.noarch


 vdsm-python-4.13.0-11.el6.x86_64
 vdsm-cli-4.13.0-11.el6.noarch
 vdsm-xmlrpc-4.13.0-11.el6.noarch
 vdsm-4.13.0-11.el6.x86_64
 vdsm-python-cpopen-4.13.0-11.el6.x86_64

 I've had a few issues with this particular installation in the past,
 as it's from a very old pre release of ovirt, then upgrading to the
 dreyou repo, then finally moving to the official Centos ovirt repo.

 Thanks, any help is greatly appreciated.

 Regards.

 Neil Wilson.

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



 --
 Dafna Ron



 --
 Dafna Ron
___
Users

Re: [Users] Migration Failed

2014-01-07 Thread Elad Ben Aharon
Is it still in the same condition? 
If yes, please add the outputs from both hosts for:

#virsh -r list
#pgrep qemu
#vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working 
in insecure mode)
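
If it is easier, the three commands can be collected in one go on each host (just 
a wrapper around the commands above; run it on both the source and the destination 
and paste the output):

  # collect the three outputs requested above on this host (sketch)
  import subprocess

  commands = [
      ["virsh", "-r", "list"],
      ["pgrep", "qemu"],
      ["vdsClient", "-s", "0", "list", "table"],  # drop the -s if running insecure
  ]
  for cmd in commands:
      print("### " + " ".join(cmd))
      subprocess.call(cmd)
      print("")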


Thanks,
  
Elad Ben Aharon
RHEV-QE storage team




- Original Message -
From: Neil nwilson...@gmail.com
To: users@ovirt.org
Sent: Tuesday, January 7, 2014 4:21:43 PM
Subject: [Users] Migration Failed

Hi guys,

I've tried to migrate a VM from one host(node03) to another(node01),
and it failed to migrate, and the VM(tux) remained on the original
host. I've now tried to migrate the same VM again, and it picks up
that the previous migration is still in progress and refuses to
migrate.

I've checked for the KVM process on each of the hosts and the VM is
definitely still running on node03 so there doesn't appear to be any
chance of the VM trying to run on both hosts (which I've had before
which is very scary).

These are my versions... and attached are my engine.log and my vdsm.log

Centos 6.5
ovirt-iso-uploader-3.3.1-1.el6.noarch
ovirt-host-deploy-1.1.2-1.el6.noarch
ovirt-release-el6-9-1.noarch
ovirt-engine-setup-3.3.1-2.el6.noarch
ovirt-engine-3.3.1-2.el6.noarch
ovirt-host-deploy-java-1.1.2-1.el6.noarch
ovirt-image-uploader-3.3.1-1.el6.noarch
ovirt-engine-dbscripts-3.3.1-2.el6.noarch
ovirt-engine-cli-3.3.0.6-1.el6.noarch
ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
ovirt-engine-userportal-3.3.1-2.el6.noarch
ovirt-log-collector-3.3.1-1.el6.noarch
ovirt-engine-tools-3.3.1-2.el6.noarch
ovirt-engine-lib-3.3.1-2.el6.noarch
ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
ovirt-engine-backend-3.3.1-2.el6.noarch
ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
ovirt-engine-restapi-3.3.1-2.el6.noarch


vdsm-python-4.13.0-11.el6.x86_64
vdsm-cli-4.13.0-11.el6.noarch
vdsm-xmlrpc-4.13.0-11.el6.noarch
vdsm-4.13.0-11.el6.x86_64
vdsm-python-cpopen-4.13.0-11.el6.x86_64

I've had a few issues with this particular installation in the past,
as it's from a very old pre release of ovirt, then upgrading to the
dreyou repo, then finally moving to the official Centos ovirt repo.

Thanks, any help is greatly appreciated.

Regards.

Neil Wilson.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-07 Thread Neil
Hi Elad,

Thanks for assisting me, yes the same condition exists, if I try to
migrate Tux it says The VM Tux is being migrated.


Below are the details requested.


[root@node01 ~]# virsh -r list
 IdName   State

 1 adam   running

[root@node01 ~]# pgrep qemu
11232
[root@node01 ~]# vdsClient -s 0 list table
63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam Up


[root@node03 ~]# virsh -r list
 IdName   State

 7 tuxrunning

[root@node03 ~]# pgrep qemu
32333
[root@node03 ~]# vdsClient -s 0 list table
2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up

Thanks.

Regards.

Neil Wilson.


On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon ebena...@redhat.com wrote:
 Is it still in the same condition?
 If yes, please add the outputs from both hosts for:

 #virsh -r list
 #pgrep qemu
 #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are 
 working in insecure mode)


 Thanks,

 Elad Ben Aharon
 RHEV-QE storage team




 - Original Message -
 From: Neil nwilson...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, January 7, 2014 4:21:43 PM
 Subject: [Users] Migration Failed

 Hi guys,

 I've tried to migrate a VM from one host(node03) to another(node01),
 and it failed to migrate, and the VM(tux) remained on the original
 host. I've now tried to migrate the same VM again, and it picks up
 that the previous migration is still in progress and refuses to
 migrate.

 I've checked for the KVM process on each of the hosts and the VM is
 definitely still running on node03 so there doesn't appear to be any
 chance of the VM trying to run on both hosts (which I've had before
 which is very scary).

 These are my versions... and attached are my engine.log and my vdsm.log

 Centos 6.5
 ovirt-iso-uploader-3.3.1-1.el6.noarch
 ovirt-host-deploy-1.1.2-1.el6.noarch
 ovirt-release-el6-9-1.noarch
 ovirt-engine-setup-3.3.1-2.el6.noarch
 ovirt-engine-3.3.1-2.el6.noarch
 ovirt-host-deploy-java-1.1.2-1.el6.noarch
 ovirt-image-uploader-3.3.1-1.el6.noarch
 ovirt-engine-dbscripts-3.3.1-2.el6.noarch
 ovirt-engine-cli-3.3.0.6-1.el6.noarch
 ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
 ovirt-engine-userportal-3.3.1-2.el6.noarch
 ovirt-log-collector-3.3.1-1.el6.noarch
 ovirt-engine-tools-3.3.1-2.el6.noarch
 ovirt-engine-lib-3.3.1-2.el6.noarch
 ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
 ovirt-engine-backend-3.3.1-2.el6.noarch
 ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
 ovirt-engine-restapi-3.3.1-2.el6.noarch


 vdsm-python-4.13.0-11.el6.x86_64
 vdsm-cli-4.13.0-11.el6.noarch
 vdsm-xmlrpc-4.13.0-11.el6.noarch
 vdsm-4.13.0-11.el6.x86_64
 vdsm-python-cpopen-4.13.0-11.el6.x86_64

 I've had a few issues with this particular installation in the past,
 as it started out as a very old pre-release of oVirt, was then upgraded
 to the dreyou repo, and finally moved to the official CentOS oVirt repo.

 Thanks, any help is greatly appreciated.

 Regards.

 Neil Wilson.

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-07 Thread Dafna Ron

--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] migration failed

2012-02-19 Thread зоррыч
Thank you!
There was a wrong hostname on one of the nodes in /etc/hostname.
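
For anyone hitting the same error, a quick way to sanity-check naming before
retrying the migration; a minimal sketch to run on each node (the IPs are the
two nodes from this thread, the names in the /etc/hosts example are
placeholders):

hostname && hostname --fqdn           # must match the name the other node uses to reach this one
getent hosts 10.1.20.7 10.2.20.8      # both addresses should resolve to the expected names
# If DNS is not reliable, pin the entries in /etc/hosts on both nodes, e.g.:
#   10.1.20.7   nodeA.example.local nodeA
#   10.2.20.8   nodeB.example.local nodeB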




-Original Message-
From: Nathan Stratton [mailto:nat...@robotics.net] 
Sent: Friday, February 17, 2012 8:27 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] migration failed

On Fri, 17 Feb 2012, зоррыч wrote:

 How do I fix it?
 I checked the hostnames on both nodes and found that they resolve
 correctly (there is an entry in /etc/hostname).
 In DNS the hostname is not registered (!)

Have you tried entering them all in /etc/hosts?


Nathan Stratton                        CTO, BlinkMind, Inc.
nathan at robotics.net                 nathan at blinkmind.com
http://www.robotics.net                http://www.blinkmind.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] migration failed

2012-02-17 Thread зоррыч
Hi. 
There are 2 nodes:
10.2.20.8 and 10.1.20.7
Migrating virtual machines from 10.2.20.8 to 10.1.20.7 succeeds,
but when trying to migrate from 10.1.20.7 to 10.2.20.8 I get the error:
Migration failed due to Error: Migration destination has an invalid hostname
(VM: 1, Source Host: 10.1.20.7).


In vdsm.log on 10.1.20.7:
Thread-5217::DEBUG::2012-02-17 11:07:08,435::clientIF::54::vds::(wrapper) [10.1.20.2]::call migrate with ({'src': '10.1.20.7', 'dst': '10.2.20.8:54321', 'vmId': '616938ca-d34f-437d-9e10-760d55eeadf6', 'method': 'online'},) {}
Thread-5217::DEBUG::2012-02-17 11:07:08,436::clientIF::357::vds::(migrate) {'src': '10.1.20.7', 'dst': '10.2.20.8:54321', 'vmId': '616938ca-d34f-437d-9e10-760d55eeadf6', 'method': 'online'}
Thread-5218::DEBUG::2012-02-17 11:07:08,437::vm::122::vm.Vm::(_setupVdsConnection) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::Destination server is: https://10.2.20.8:54321
Thread-5217::DEBUG::2012-02-17 11:07:08,438::clientIF::59::vds::(wrapper) return migrate with {'status': {'message': 'Migration process starting', 'code': 0}}
Thread-5218::DEBUG::2012-02-17 11:07:08,438::vm::124::vm.Vm::(_setupVdsConnection) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::Initiating connection with destination
Thread-5218::DEBUG::2012-02-17 11:07:08,490::vm::170::vm.Vm::(_prepareGuest) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::migration Process begins
Thread-5218::DEBUG::2012-02-17 11:07:08,493::vm::217::vm.Vm::(run) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::migration semaphore acquired
Thread-5218::ERROR::2012-02-17 11:07:08,517::vm::176::vm.Vm::(_recover) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::migration destination error: Migration destination has an invalid hostname
Thread-5218::ERROR::2012-02-17 11:07:08,570::vm::231::vm.Vm::(run) vmId=`616938ca-d34f-437d-9e10-760d55eeadf6`::Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 223, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 400, in _startUnderlyingMigration
    raise RuntimeError('migration destination error: ' + response['status']['message'])
RuntimeError: migration destination error: Migration destination has an invalid hostname

Thread-5220::DEBUG::2012-02-17 11:07:09,299::clientIF::54::vds::(wrapper) [10.1.20.2]::call getVmStats with ('616938ca-d34f-437d-9e10-760d55eeadf6',) {}
Thread-5220::DEBUG::2012-02-17 11:07:09,300::clientIF::59::vds::(wrapper) return getVmStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '8620', 'displayIp': '0', 'displayPort': u'5903', 'session': 'Unknown', 'displaySecurePort': u'5904', 'timeOffset': '0', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet3': {'macAddr': u'00:1a:4a:a8:7a:06', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet3'}}, 'vmId': '616938ca-d34f-437d-9e10-760d55eeadf6', 'monitorResponse': '0', 'cpuUser': '0.00', 'disks': {u'vda': {'readLatency': '0', 'apparentsize': '1073741824', 'writeLatency': '0', 'imageID': '4b62aa22-c3e8-423e-b547-b4bc21c24ef7', 'flushLatency': '0', 'readRate': '0.00', 'truesize': '1073745920', 'writeRate': '0.00'}}, 'boot': 'c', 'statsAge': '0.05', 'cpuIdle': '100.00', 'elapsedTime': '833', 'vmType': 'kvm', 'cpuSys': '0.00', 'appsList': [], 'guestIPs': '', 'displayType': 'qxl', 'nice': ''}]}
Thread-5221::DEBUG::2012-02-17 11:07:09,317::clientIF::54::vds::(wrapper) [10.1.20.2]::call migrateStatus with ('616938ca-d34f-437d-9e10-760d55eeadf6',) {}
Thread-5221::DEBUG::2012-02-17 11:07:09,317::clientIF::59::vds::(wrapper) return migrateStatus with {'status': {'message': 'Migration destination has an invalid hostname', 'code': 39}}



In vdsm.log on 10.2.20.8:

Thread-1825::DEBUG::2012-02-17 11:09:06,431::clientIF::54::vds::(wrapper)
[10.1.20.7]::call getVmStats with ('616938ca-d34f-437d-9e10-760d55eeadf6',)
{}
Thread-1825::DEBUG::2012-02-17 11:09:06,432::clientIF::59::vds::(wrapper)
return getVmStats with {'status': {'message': 'Virtual machine does not
exist', 'code': 1}}
Thread-1826::DEBUG::2012-02-17 11:09:06,454::clientIF::54::vds::(wrapper)
[10.1.20.7]::call migrationCreate with ({'bridge': 'ovirtmgmt',
'acpiEnable': 'true', 'emulatedMachine': 'pc', 'afterMigrationStatus': 'Up',
'spiceSecureChannels': 'smain,sinputs', 'vmId':
'616938ca-d34f-437d-9e10-760d55eeadf6', 'transparentHugePages': 'true',
'displaySecurePort': '5904', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType':
'Conroe', 'custom': {}, 'migrationDest': 'libvirt', 'macAddr':
'00:1a:4a:a8:7a:06', 'boot': 'c', 'smp': '1',