[ovirt-users] migration failures - libvirtError - listen attribute must match address attribute of first listen element

2017-03-29 Thread Devin A. Bougie
We have a new 4.1.1 cluster setup.  Migration of VMs that have a
console/graphics device configured is failing.  Migration of VMs that run
headless succeeds.

The red flag in vdsm.log on the source is:
libvirtError: unsupported configuration: graphics 'listen' attribute 
'192.168.55.82' must match 'address' attribute of first listen element (found 
'192.168.55.84')
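As a side note, the consistency rule libvirt is enforcing here can be sketched
as follows. This is a minimal illustration, not vdsm's or libvirt's actual code,
and the XML below is hypothetical (modeled on the addresses in the error, not
dumped from the failing VM): the `listen` attribute on `<graphics>` must equal
the `address` of the first `<listen>` sub-element.

```python
# Sketch of libvirt's check: the 'listen' attribute on <graphics> must
# match the 'address' of the first <listen> sub-element (hypothetical XML).
import xml.etree.ElementTree as ET

def graphics_listen_mismatches(domain_xml):
    """Return (listen_attr, first_listen_address) pairs that disagree."""
    root = ET.fromstring(domain_xml)
    mismatches = []
    for g in root.iter('graphics'):
        attr = g.get('listen')
        listens = g.findall('listen')
        first = listens[0].get('address') if listens else None
        if attr and first and attr != first:
            mismatches.append((attr, first))
    return mismatches

# Destination-side XML still carrying the source host's address in the
# attribute, as in the error above:
xml = """<domain type='kvm'>
  <devices>
    <graphics type='vnc' port='5900' listen='192.168.55.82'>
      <listen type='address' address='192.168.55.84'/>
    </graphics>
  </devices>
</domain>"""
print(graphics_listen_mismatches(xml))  # -> [('192.168.55.82', '192.168.55.84')]
```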

This happens when the console is set to either VNC or SPICE.  Please see below 
for larger excerpts from vdsm.log on the source and destination.  Any help 
would be greatly appreciated.

Many thanks,
Devin

SOURCE:
--
2017-03-29 09:53:30,314-0400 INFO  (jsonrpc/5) [vdsm.api] START migrate
args=(, {u'incomingLimit': 2, u'src': u'192.168.55.82',
u'dstqemu': u'192.168.55.84', u'autoConverge': u'false', u'tunneled': u'false',
u'enableGuestEvents': False, u'dst': u'192.168.55.84:54321',
u'vmId': u'cf9c5dbf-3924-47c6-b323-22ac90a1f682', u'abortOnError': u'true',
u'outgoingLimit': 2, u'compressed': u'false', u'maxBandwidth': 5000,
u'method': u'online', 'mode': 'remote'}) kwargs={} (api:37)
2017-03-29 09:53:30,315-0400 INFO  (jsonrpc/5) [vdsm.api] FINISH migrate 
return={'status': {'message': 'Migration in progress', 'code': 0}, 'progress': 
0} (api:43)
2017-03-29 09:53:30,315-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call 
VM.migrate succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,444-0400 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:52494 
(protocoldetector:72)
2017-03-29 09:53:30,450-0400 INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from ::1:52494 (protocoldetector:127)
2017-03-29 09:53:30,450-0400 INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT request (stompreactor:102)
2017-03-29 09:53:30,451-0400 INFO  (JsonRpc (StompReactor)) 
[Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-03-29 09:53:30,628-0400 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
Host.getHardwareInfo succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,630-0400 INFO  (jsonrpc/7) [dispatcher] Run and protect: 
repoStats(options=None) (logUtils:51)
2017-03-29 09:53:30,631-0400 INFO  (jsonrpc/7) [dispatcher] Run and protect:
repoStats, Return response: {u'016ceee8-9117-4e8a-b611-f58f6763a098':
{'code': 0, 'actual': True, 'version': 4, 'acquired': True,
'delay': '0.000226545', 'lastCheck': '3.2', 'valid': True},
u'2438f819-e7f5-4bb1-ad0d-5349fa371e6e': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000232943', 'lastCheck': '3.1',
'valid': True}, u'48d4f45d-0bdd-4f4a-90b6-35efe2da935a': {'code': 0,
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000612878',
'lastCheck': '8.3', 'valid': True}} (logUtils:54)
2017-03-29 09:53:30,631-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:515)
2017-03-29 09:53:30,701-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Creation of destination VM took: 
0 seconds (migration:455)
2017-03-29 09:53:30,701-0400 INFO  (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') starting migration to 
qemu+tls://192.168.55.84/system with miguri tcp://192.168.55.84 (migration:480)
2017-03-29 09:53:31,120-0400 ERROR (migsrc/cf9c5dbf) [virt.vm]
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') unsupported configuration:
graphics 'listen' attribute '192.168.55.82' must match 'address' attribute of
first listen element (found '192.168.55.84') (migration:287)
2017-03-29 09:53:31,206-0400 ERROR (migsrc/cf9c5dbf) [virt.vm] 
(vmId='cf9c5dbf-3924-47c6-b323-22ac90a1f682') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in _startUnderlyingMigration
    self._perform_with_downtime_thread(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 555, in _perform_with_downtime_thread
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 528, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: unsupported configuration: graphics 'listen' attribute
'192.168.55.82' must match 'address' attribute of first listen element
(found '192.168.55.84')

Re: [ovirt-users] migration failures

2017-02-23 Thread Michal Skrivanek

> On 23 Feb 2017, at 17:08, Michael Watters  wrote:
> 
> I canceled the migration and manually moved the VM to another host
> running ovirt 4.0.  The source node was then able to set itself to
> maintenance mode without any errors. 
> 
> On 02/23/2017 10:46 AM, Michael Watters wrote:
>> 
>> On 02/23/2017 10:28 AM, Francesco Romani wrote:
>>> The load/save state errors are most often found when the two sides of
>>> the migration have different and incompatible versions of QEMU.
>>> In turn, this is quite often a bug, because forward migrations (e.g.
>>> from 2.3.0 to 2.4.0) are always supported, for obvious upgrade needs.
>> I think you're on to something there.  The destination server is running
>> ovirt 3.6 while the source server is on 4.0.  The cluster compatibility
>> level is also set to 3.6 since I have not upgraded every host node yet.

That is supported and works when the versions are really the latest ones.
I suggest checking the repos; it may be that you are not getting updates for
the CentOS or qemu-kvm-ev packages on either the 3.6 or 4.0 side.
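To make the "forward migration" point above concrete, here is a minimal sketch
of the rule. The version strings are hypothetical, and real RPM comparison also
weighs epoch and release tags, so treat this as an illustration only:

```python
# Sketch of the "forward migration" rule: a migration is forward (supported)
# only when the destination's QEMU is at least as new as the source's.
def version_tuple(v):
    """'2.3.0-31.el7' -> (2, 3, 0); the release tag after '-' is ignored."""
    return tuple(int(x) for x in v.split('-')[0].split('.'))

def is_forward(src_qemu, dst_qemu):
    """True when the destination QEMU version is >= the source version."""
    return version_tuple(dst_qemu) >= version_tuple(src_qemu)

print(is_forward('2.3.0', '2.4.0'))  # True: supported direction
print(is_forward('2.6.0', '2.3.0'))  # False: backward, expect load-state failures
```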


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] migration failures

2017-02-23 Thread Michael Watters
I canceled the migration and manually moved the VM to another host
running ovirt 4.0.  The source node was then able to set itself to
maintenance mode without any errors. 

On 02/23/2017 10:46 AM, Michael Watters wrote:
>
> On 02/23/2017 10:28 AM, Francesco Romani wrote:
>> The load/save state errors are most often found when the two sides of
>> the migration have different and incompatible versions of QEMU.
>> In turn, this is quite often a bug, because forward migrations (e.g.
>> from 2.3.0 to 2.4.0) are always supported, for obvious upgrade needs.
> I think you're on to something there.  The destination server is running
> ovirt 3.6 while the source server is on 4.0.  The cluster compatibility
> level is also set to 3.6 since I have not upgraded every host node yet.


Re: [ovirt-users] migration failures

2017-02-23 Thread Michael Watters


On 02/23/2017 10:28 AM, Francesco Romani wrote:
>
> The load/save state errors are most often found when the two sides of
> the migration have different and incompatible versions of QEMU.
> In turn, this is quite often a bug, because forward migrations (e.g.
> from 2.3.0 to 2.4.0) are always supported, for obvious upgrade needs.

I think you're on to something there.  The destination server is running
ovirt 3.6 while the source server is on 4.0.  The cluster compatibility
level is also set to 3.6 since I have not upgraded every host node yet.




Re: [ovirt-users] migration failures

2017-02-23 Thread Michael Watters

On 02/23/2017 10:28 AM, Francesco Romani wrote:
> On 02/23/2017 04:20 PM, Michael Watters wrote:
>> I have an ovirt cluster running ovirt 4.0 and I am seeing several errors
>> when I attempt to put one of our nodes into maintenance mode.  The logs
>> on the source server show errors as follows.
>>
>> Feb 23 10:15:08 ovirt-node-production3.example.com libvirtd[18800]:
>> operation aborted: migration job: canceled by client
>> Feb 23 10:15:08 ovirt-node-production3.example.com libvirtd[18800]:
>> internal error: qemu unexpectedly closed the monitor:
>> 2017-02-23T15:12:58.289459Z qemu-kvm: warning: CPU(s) not present in any
>> NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2017-02-23T15:12:58.289684Z qemu-kvm: warning: All CPU(s) up to maxcpus
>> should be described in NUMA config
>> 2017-02-23T15:15:07.889891Z qemu-kvm: Unknown combination of migration
>> flags: 0
>> 2017-02-23T15:15:07.890821Z qemu-kvm: error while loading state section
>> id 2(ram)
>> 2017-02-23T15:15:07.892357Z qemu-kvm: load of migration failed: Invalid
>> argument
>>
>> This cluster does *not* have NUMA enabled, so I am not sure why this
>> error is happening.
> It's an implementation detail: NUMA is enabled transparently because it
> is required for memory hotplug support, and it should be fully transparent.
>
>
>> Some migrations did succeed after being restarted
>> on a different host; however, I have two VMs that appear to be stuck.  Is
>> there a way to resolve this?
> The load/save state errors are most often found when the two sides of
> the migration have different and incompatible versions of QEMU.
> In turn, this is quite often a bug, because forward migrations (e.g.
> from 2.3.0 to 2.4.0) are always supported, for obvious upgrade needs.
>
> So, which versions of libvirt and qemu do you have on the two sides of the
> failing migration paths?
>
> Bests,
>



Re: [ovirt-users] migration failures

2017-02-23 Thread Francesco Romani
On 02/23/2017 04:20 PM, Michael Watters wrote:
> I have an ovirt cluster running ovirt 4.0 and I am seeing several errors
> when I attempt to put one of our nodes into maintenance mode.  The logs
> on the source server show errors as follows.
>
> Feb 23 10:15:08 ovirt-node-production3.example.com libvirtd[18800]:
> operation aborted: migration job: canceled by client
> Feb 23 10:15:08 ovirt-node-production3.example.com libvirtd[18800]:
> internal error: qemu unexpectedly closed the monitor:
> 2017-02-23T15:12:58.289459Z qemu-kvm: warning: CPU(s) not present in any
> NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> 2017-02-23T15:12:58.289684Z qemu-kvm: warning: All CPU(s) up to maxcpus
> should be described in NUMA config
> 2017-02-23T15:15:07.889891Z qemu-kvm: Unknown combination of migration
> flags: 0
> 2017-02-23T15:15:07.890821Z qemu-kvm: error while loading state section
> id 2(ram)
> 2017-02-23T15:15:07.892357Z qemu-kvm: load of migration failed: Invalid
> argument
>
> This cluster does *not* have NUMA enabled, so I am not sure why this
> error is happening.

It's an implementation detail: NUMA is enabled transparently because it
is required for memory hotplug support, and it should be fully transparent.


> Some migrations did succeed after being restarted
> on a different host; however, I have two VMs that appear to be stuck.  Is
> there a way to resolve this?

The load/save state errors are most often found when the two sides of
the migration have different and incompatible versions of QEMU.
In turn, this is quite often a bug, because forward migrations (e.g.
from 2.3.0 to 2.4.0) are always supported, for obvious upgrade needs.

So, which versions of libvirt and qemu do you have on the two sides of the
failing migration paths?

Bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
IRC: fromani



[ovirt-users] migration failures

2017-02-23 Thread Michael Watters
I have an ovirt cluster running ovirt 4.0 and I am seeing several errors
when I attempt to put one of our nodes into maintenance mode.  The logs
on the source server show errors as follows.

Feb 23 10:15:08 ovirt-node-production3.example.com libvirtd[18800]: operation
aborted: migration job: canceled by client
Feb 23 10:15:08 ovirt-node-production3.example.com libvirtd[18800]: internal
error: qemu unexpectedly closed the monitor: 2017-02-23T15:12:58.289459Z
qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 2 3 4 5 6 7 8 9 10 11
12 13 14 15
2017-02-23T15:12:58.289684Z qemu-kvm: warning: All CPU(s) up to maxcpus should
be described in NUMA config
2017-02-23T15:15:07.889891Z qemu-kvm: Unknown combination of migration flags: 0
2017-02-23T15:15:07.890821Z qemu-kvm: error while loading state section id
2(ram)
2017-02-23T15:15:07.892357Z qemu-kvm: load of migration failed: Invalid argument

This cluster does *not* have NUMA enabled, so I am not sure why this
error is happening.  Some migrations did succeed after being restarted
on a different host; however, I have two VMs that appear to be stuck.  Is
there a way to resolve this?

