[ovirt-users] Unable to start VM with SRIOV IB interface

2022-12-13 Thread oliveira
Hi everyone,

On a new oVirt 4.4.10.7-1.el8 deployment I am unable to start a VM with an SR-IOV 
IB interface.
I always get the message:

Error while executing action:

login003:

Cannot run VM. There is no host that satisfies current scheduling 
constraints. See below for details:
The host nvirt002.lca.uc.pt did not satisfy internal filter HostDevice 
because the host does not provide requested host devices.
The host nvirt000.lca.uc.pt did not satisfy internal filter HostDevice 
because some of the required host devices are unavailable.
The host nvirt003.lca.uc.pt did not satisfy internal filter HostDevice 
because the host does not provide requested host devices.
The host nvirt005.lca.uc.pt did not satisfy internal filter HostDevice 
because the host does not provide requested host devices.
The host nvirt004.lca.uc.pt did not satisfy internal filter HostDevice 
because the host does not provide requested host devices.
The host nvirt001.lca.uc.pt did not satisfy internal filter HostDevice 
because the host does not provide requested host devices.

Given that I am trying to run it on login000, I guess the relevant error is that 
the host device is unavailable. I cannot find any other errors or messages in 
any of the logs that seem relevant to this.
How do I even start debugging this? I have another deployment with 4.3 where it 
just works.
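For what it's worth, a first debugging step is to compare what each host actually exposes against what the engine is asking for. This is a generic sysfs check, not something oVirt-specific from this thread:

```shell
# Sketch: on each host, list SR-IOV physical functions and how many
# virtual functions are currently enabled. If the count is 0 (or no
# such file exists), the HostDevice scheduling filter has nothing to
# hand out on that host.
for dev in /sys/class/net/*/device/sriov_numvfs; do
    if [ -e "$dev" ]; then
        echo "$dev: $(cat "$dev") VFs enabled"
    fi
done
echo "scan complete"
```

It can also help to compare that against what vdsm reports to the engine (Compute > Hosts > Host Devices in the UI) and against the VF count configured in Setup Networks; "unavailable" may simply mean a free VF is already assigned to another VM.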


Best Regards and thank you,

Miguel Oliveira
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6NXDPOAHZBZONCWX64BQPJB4YDJ23JH/


[ovirt-users] Unable to start VM

2018-07-10 Thread Alex K
Hi all,

I did routine maintenance today (updating the hosts) on an oVirt cluster (4.2),
and one VM was complaining about an invalid snapshot.
After shutting the VM down, it is not able to start again, giving the error:

VM Gitlab is down with error. Exit message: Bad volume specification
{'serial': 'b6af2856-a164-484a-afe5-9836bbdd14e8', 'index': 0, 'iface':
'virtio', 'apparentsize': '51838976', 'specParams': {}, 'cache': 'none',
'imageID': 'b6af2856-a164-484a-afe5-9836bbdd14e8', 'truesize': '52011008',
'type': 'disk', 'domainID': '142bbde6-ef9d-4a52-b9da-2de533c1f1bd',
'reqsize': '0', 'format': 'cow', 'poolID':
'0001-0001-0001-0001-0311', 'device': 'disk', 'path':
'/rhev/data-center/0001-0001-0001-0001-0311/142bbde6-ef9d-4a52-b9da-2de533c1f1bd/images/b6af2856-a164-484a-afe5-9836bbdd14e8/f3125f62-c909-472f-919c-844e0b8c156d',
'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID':
'f3125f62-c909-472f-919c-844e0b8c156d', 'diskType': 'file', 'alias':
'ua-b6af2856-a164-484a-afe5-9836bbdd14e8', 'discard': False}.

I also see the following error:

VDSM command CopyImageVDS failed: Image is not a legal chain:
(u'b6af2856-a164-484a-afe5-9836bbdd14e8',)

This seems like a corrupt VM disk?

The VM had 3 snapshots. I was able to delete one from the GUI, but I am not able
to delete the other two, as the task fails. Generally, I am not allowed to
clone, export, or do anything else with the VM.

Have you encountered something similar? Any advice?
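For context on what "not a legal chain" is checking: each qcow2 snapshot volume records its parent in a backing-file field of its header, and vdsm walks those parent links against its own volume metadata. Below is a minimal, self-contained sketch of reading that link (offsets per the qcow2 header layout; this is illustrative, not vdsm's actual code):

```python
import struct
import tempfile

QCOW2_MAGIC = b"QFI\xfb"

def backing_file(path):
    """Return the parent volume a qcow2 file points at, or None."""
    with open(path, "rb") as f:
        header = f.read(32)
        if header[:4] != QCOW2_MAGIC:
            return None  # raw volume: no chain link stored in the file
        # bytes 8-16: offset of the backing-file name; bytes 16-20: its length
        bf_offset, bf_size = struct.unpack(">QI", header[8:20])
        if bf_offset == 0:
            return None  # base volume: top of the chain
        f.seek(bf_offset)
        return f.read(bf_size).decode()

# Demo with a synthetic header (a real volume would live under
# /rhev/data-center/<pool>/<domain>/images/<imageID>/):
parent = b"f3125f62-c909-472f-919c-844e0b8c156d"
hdr = QCOW2_MAGIC + struct.pack(">I", 3) + struct.pack(">QI", 32, len(parent))
with tempfile.NamedTemporaryFile(suffix=".qcow2") as f:
    f.write(hdr.ljust(32, b"\x00") + parent)
    f.flush()
    print(backing_file(f.name))  # the volume this snapshot layer depends on
```

In practice, running `qemu-img info --backing-chain` on the top volume path from the error message (with the VM down) is usually the quickest way to see which link in the chain is broken or missing.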

Thanks,
Alex


[ovirt-users] Unable to start VM on host with OVS networking

2018-05-17 Thread Jonathan Dieter
I have a production oVirt setup that's gone through multiple updates over the 
years.  At some point when 4.0 or 4.1 came out, I switched from legacy 
networking to OVS, and everything worked perfectly until I upgraded to 4.2.  
Since I upgraded to 4.2, I've been getting messages that the networks were all 
out of sync, but everything continued working properly.

Today I tracked down the network sync problem, fixed it on one of my three 
hosts, and then attempted to start a VM on the host.  It refused to start with 
the error message: "Unable to add bridge ovirtmgmt port vnet0: Operation not 
supported".  From what I can tell, the xml being generated is still for the old 
legacy network.  I completely reinstalled the node, using the latest 4.2.3 node 
ISO image, and it still doesn't work.

In the cluster, the switch type is "OVS (Experimental)" (and this option can't 
be changed, apparently), the compatibility version is 4.2, the firewall type is 
firewalld and there's no "Default Network Provider".

I suspect that my upgrades have somehow left my system in half OVS/half legacy 
mode, but I'm not sure how to move it all the way to OVS mode and I don't want 
to mess with the other two hosts until I'm sure I've got it figured out.
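One way to confirm the half-OVS/half-legacy suspicion on a host is to see which world each bridge belongs to; if the generated XML expects a legacy `ovirtmgmt` Linux bridge port but the host (or vice versa) is wired for OVS, "Operation not supported" is exactly what libvirt returns. These are generic iproute2/openvswitch commands, not oVirt-specific, and guarded in case a tool is absent:

```shell
# Sketch: list OVS bridges and legacy Linux bridges side by side.
if command -v ovs-vsctl >/dev/null 2>&1; then
    echo "OVS bridges:"
    ovs-vsctl list-br
else
    echo "ovs-vsctl not installed on this host"
fi
echo "Linux bridges:"
ip -o link show type bridge | awk -F': ' '{print $2}'
echo "done"
```

A host fully in OVS mode should show its networks under `ovs-vsctl list-br` and not as Linux bridges; seeing `ovirtmgmt` in both lists (or only as a Linux bridge) would support the half-migrated theory.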

My (compressed) vdsm.log is at https://www.lesbg.com/jdieter/vdsm.log.xz and my 
(compressed) supervdsm.log is at https://www.lesbg.com/jdieter/supervdsm.log.xz.

If anyone could point me in the right direction to get this fixed, I'd sure 
appreciate it.

Jonathan


[ovirt-users] Unable to start VM after 4.2 upgrade

2018-05-17 Thread Ernest Beinrohr
Hi, I updated my engine and one host from 4.1 to 4.2, and now I cannot 
start a VM; I get this error:


"UnsupportedType: Unsupported {} for ioTune".

My other six 4.1 hosts are able to start VMs normally with the new 4.2 engine.


My VM has this inside:

total_bytes_sec="0" total_iops_sec="0" write_bytes_sec="0" write_iops_sec="100"/>


error:

2018-05-17 14:30:38,006+0200 ERROR (vm/0d53dd5d) [virt.vm] (vmId='0d53dd5d-ef16-4763-bbdc-2dc173087bf5') The vm start process failed (vm:943)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2882, in _run
    self._domDependentInit()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2458, in _domDependentInit
    self._vmDependentInit()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2495, in _vmDependentInit
    self._sync_metadata()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5158, in _sync_metadata
    self._md_desc.dump(self._dom)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 509, in dump
    md_xml = self._build_xml()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 721, in _build_xml
    md_elem = self._build_tree(namespace, namespace_uri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 711, in _build_tree
    dev_elem = _dump_device(metadata_obj, data)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 800, in _dump_device
    elems.append(_dump_device_spec_params(md_obj, value))
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 866, in _dump_device_spec_params
    spec_params_elem = md_obj.dump(_SPEC_PARAMS, **value)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 229, in dump
    _keyvalue_to_elem(self._add_ns(key), value, elem)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 916, in _keyvalue_to_elem
    raise UnsupportedType(key, value)
UnsupportedType: Unsupported {} for ioTune

2018-05-17 14:30:38,006+0200 INFO (vm/0d53dd5d) [virt.vm] (vmId='0d53dd5d-ef16-4763-bbdc-2dc173087bf5') Changed state to Down: Unsupported {} for ioTune (code=1) (vm:1683)

2018-05-17 14:30:38,007+0200 ERROR (vm/0d53dd5d) [root] FINISH thread  failed (concurrent:201)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 194, in run
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 944, in _startUnderlyingVm
    self.setDownStatus(ERROR, vmexitreason.GENERIC_ERROR, str(e))
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1685, in setDownStatus
    self._update_metadata()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5145, in _update_metadata
    self._sync_metadata()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5158, in _sync_metadata
    self._md_desc.dump(self._dom)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 509, in dump
    md_xml = self._build_xml()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 721, in _build_xml
    md_elem = self._build_tree(namespace, namespace_uri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 711, in _build_tree
    dev_elem = _dump_device(metadata_obj, data)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 800, in _dump_device
    elems.append(_dump_device_spec_params(md_obj, value))
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 866, in _dump_device_spec_params
    spec_params_elem = md_obj.dump(_SPEC_PARAMS, **value)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 229, in dump
    _keyvalue_to_elem(self._add_ns(key), value, elem)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/metadata.py", line 916, in _keyvalue_to_elem
    raise UnsupportedType(key, value)
UnsupportedType: Unsupported {} for ioTune
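The traceback bottoms out in vdsm's metadata serializer, which can only encode a handful of scalar types; a device that still carries an empty ioTune spec-param from the 4.1 host therefore aborts the whole metadata dump, and with it the VM start. A simplified illustration of that failure mode (names modeled on vdsm/virt/metadata.py, but not the exact source):

```python
import xml.etree.ElementTree as ET

class UnsupportedType(Exception):
    """Raised when a metadata value has no XML encoding."""

def keyvalue_to_elem(key, value, parent):
    # Simplified from vdsm's _keyvalue_to_elem: only a few scalar
    # types can be serialized into the per-device metadata XML.
    elem = ET.SubElement(parent, key)
    if isinstance(value, (str, int, float)):
        elem.text = str(value)
    else:
        # No encoder for dicts, so a leftover empty ioTune mapping
        # aborts the whole metadata dump.
        raise UnsupportedType(key, value)
    return elem

root = ET.Element("device")
keyvalue_to_elem("bootOrder", 1, root)      # fine: scalar value
try:
    keyvalue_to_elem("ioTune", {}, root)    # fails exactly like the log
except UnsupportedType as exc:
    print("UnsupportedType:", exc.args)
```

A plausible workaround (an assumption on my part, not something confirmed in this thread) would be to clear or re-save the disk QoS/ioTune settings on that VM so the empty mapping is replaced, or to bring vdsm on the 4.2 host up to a level that tolerates it.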


Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Oliver Riesener

On 07.03.2018 17:22, Arik Hadas wrote:



On Wed, Mar 7, 2018 at 6:15 PM, Jan Siml wrote:


Hello,

Enable network and disks on your VM, then do:
Run -> ONCE Ok Ignore errors. Ok
Run
Cheers


WTF! That worked.

Do you know why this works and what happens in the
background? Is there a Bugzilla bug ID for this issue?

I figured it out while attempting to change the VM CPU family of the old
VMs, as a last try to get them working again.
After a live upgrade to 4.2 with running VMs, once I shut them down they
were all dead, with disabled network and disks.

It would have been no delight to delete them all and recreate them, the other way.

Have a nice day.


BTW, here is the list of devices before:

engine=# select type, device, address, is_managed, is_plugged,
alias from vm_device where vm_id in (select vm_guid from vm_static
where vm_name='prod-hub-201');
    type    |    device     |                          address                         | is_managed | is_plugged |     alias
------------+---------------+----------------------------------------------------------+------------+------------+----------------
 video      | qxl           |                                                          | t          | t          |
 controller | virtio-scsi   |                                                          | t          | t          |
 balloon    | memballoon    |                                                          | t          | f          | balloon0
 graphics   | spice         |                                                          | t          | t          |
 controller | virtio-serial | {slot=0x06, bus=0x00, domain=0x, type=pci, function=0x0} | t          | t          | virtio-serial0
 disk       | disk          | {slot=0x07, bus=0x00, domain=0x, type=pci, function=0x0} | f          | t          | virtio-disk0
 memballoon | memballoon    | {slot=0x08, bus=0x00, domain=0x, type=pci, function=0x0} | f          | t          | balloon0
 interface  | bridge        | {slot=0x03, bus=0x00, domain=0x, type=pci, function=0x0} | f          | t          | net0
 interface  | bridge        | {slot=0x09, bus=0x00, domain=0x, type=pci, function=0x0} | f          | t          | net1
 controller | scsi          | {slot=0x05, bus=0x00, domain=0x, type=pci, function=0x0} | f          | t          | scsi0
 controller | ide           | {slot=0x01, bus=0x00, domain=0x, type=pci, function=0x1} | f          | t          | ide
 controller | usb           | {slot=0x01, bus=0x00, domain=0x, type=pci, function=0x2} | t          | t          | usb
 channel    | unix          | {bus=0, controller=0, type=virtio-serial, port=1}        | f          | t          | channel0
 channel    | unix          | {bus=0, controller=0, type=virtio-serial, port=2}        | f          | t          | channel1
 channel    | spicevmc      | {bus=0, controller=0, type=virtio-serial, port=3}        | f          | t          | channel2
 interface  | bridge        |                                                          | t          | t          | net1
 interface  | bridge        |                                                          | t          | t          | net0
 disk       | cdrom         |                                                          | t          | f          | ide0-1-0
 disk       | cdrom         | {bus=1, controller=0, type=drive, target=0, unit=0}      | f          | t          | ide0-1-0
 disk       | disk          |                                                          | t          | t          | virtio-disk0
(20 rows)

and afterwards:

engine=# select type, device, address, is_managed, is_plugged,
alias from vm_device where vm_id in (select vm_guid from vm_static
where vm_name='prod-hub-201');
    type    |    device     |                          address                         | is_managed | is_plugged |     alias
------------+---------------+----------------------------------------------------------+------------+------------+----------------
 channel    | spicevmc      | {type=virtio-serial, bus=0, controller=0, port=3}        | f          | t          | channel2
 channel    | unix          | {type=virtio-serial, bus=0, controller=0, port=1}        | f          | t          | channel0
 interface  | bridge        | {type=pci, slot=0x04, bus=0x00, domain=0x, function=0x0} | t          | t          | net1
 controller | usb           | {type=pci, slot=0x01, bus=0x00, domain=0x, function=0x2} | t          | t          | usb
 controller | virtio-serial | {type=pci, slot=0x06, bus=0x00, domain=0x, function=0x0} | t          | t          | virtio-serial0
 interface  | bridge        | {type=pci, slot=0x03, bus=0x00, domain=0x, function=0x0} | t          | t          | net0
 controller | virtio-scsi   | {type=pci, slot=0x05, bus=0x00, domain=0x, function=0x0} | t          | t          | scsi0
 video      | qxl           | {type=pci, slot=0x02, bus=0x00, domain=0x, function=0x0} | 

Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Arik Hadas
On Wed, Mar 7, 2018 at 6:15 PM, Jan Siml  wrote:

> Hello,
>
> Enable network and disks on your VM, then do:
>>> Run -> ONCE Ok Ignore errors. Ok
>>> Run
>>> Cheers
>>>
>>
>> WTF! That worked.
>>
>> Do you know why this works and what happens in the background? Is there
>> a Bugzilla bug ID for this issue?
>>
>
> BTW, here is the list of devices before:
>
> engine=# select type, device, address, is_managed, is_plugged, alias from
> vm_device where vm_id in (select vm_guid from vm_static where
> vm_name='prod-hub-201');
> type|device |   address
> | is_managed | is_plugged | alias
> +---+---
> ---+++
>  video  | qxl   || t  | t
> |
>  controller | virtio-scsi   || t  | t
> |
>  balloon| memballoon|| t  | f
> | balloon0
>  graphics   | spice || t  | t
> |
>  controller | virtio-serial | {slot=0x06, bus=0x00, domain=0x,
> type=pci, function=0x0} | t  | t  | virtio-serial0
>  disk   | disk  | {slot=0x07, bus=0x00, domain=0x,
> type=pci, function=0x0} | f  | t  | virtio-disk0
>  memballoon | memballoon| {slot=0x08, bus=0x00, domain=0x,
> type=pci, function=0x0} | f  | t  | balloon0
>  interface  | bridge| {slot=0x03, bus=0x00, domain=0x,
> type=pci, function=0x0} | f  | t  | net0
>  interface  | bridge| {slot=0x09, bus=0x00, domain=0x,
> type=pci, function=0x0} | f  | t  | net1
>  controller | scsi  | {slot=0x05, bus=0x00, domain=0x,
> type=pci, function=0x0} | f  | t  | scsi0
>  controller | ide   | {slot=0x01, bus=0x00, domain=0x,
> type=pci, function=0x1} | f  | t  | ide
>  controller | usb   | {slot=0x01, bus=0x00, domain=0x,
> type=pci, function=0x2} | t  | t  | usb
>  channel| unix  | {bus=0, controller=0, type=virtio-serial,
> port=1}| f  | t  | channel0
>  channel| unix  | {bus=0, controller=0, type=virtio-serial,
> port=2}| f  | t  | channel1
>  channel| spicevmc  | {bus=0, controller=0, type=virtio-serial,
> port=3}| f  | t  | channel2
>  interface  | bridge|| t  | t
> | net1
>  interface  | bridge|| t  | t
> | net0
>  disk   | cdrom || t  | f
> | ide0-1-0
>  disk   | cdrom | {bus=1, controller=0, type=drive, target=0,
> unit=0}  | f  | t  | ide0-1-0
>  disk   | disk  || t  | t
> | virtio-disk0
> (20 rows)
>
> and afterwards:
>
> engine=# select type, device, address, is_managed, is_plugged, alias from
> vm_device where vm_id in (select vm_guid from vm_static where
> vm_name='prod-hub-201');
> type|device |   address
> | is_managed | is_plugged | alias
> +---+---
> ---+++
>  channel| spicevmc  | {type=virtio-serial, bus=0, controller=0,
> port=3}| f  | t  | channel2
>  channel| unix  | {type=virtio-serial, bus=0, controller=0,
> port=1}| f  | t  | channel0
>  interface  | bridge| {type=pci, slot=0x04, bus=0x00,
> domain=0x, function=0x0} | t  | t  | net1
>  controller | usb   | {type=pci, slot=0x01, bus=0x00,
> domain=0x, function=0x2} | t  | t  | usb
>  controller | virtio-serial | {type=pci, slot=0x06, bus=0x00,
> domain=0x, function=0x0} | t  | t  | virtio-serial0
>  interface  | bridge| {type=pci, slot=0x03, bus=0x00,
> domain=0x, function=0x0} | t  | t  | net0
>  controller | virtio-scsi   | {type=pci, slot=0x05, bus=0x00,
> domain=0x, function=0x0} | t  | t  | scsi0
>  video  | qxl   | {type=pci, slot=0x02, bus=0x00,
> domain=0x, function=0x0} | t  | t  | video0
>  channel| unix  | {type=virtio-serial, bus=0, controller=0,
> port=2}| f  | t  | channel1
>  balloon| memballoon|| t  | t
> | balloon0
>  graphics   | spice || t  | t
> |
>  disk   | cdrom || t  | f
> | ide0-1-0
>  disk   | disk  | {type=pci, slot=0x07, bus=0x00,
> domain=0x, function=0x0} | t  | t  | virtio-disk0
> (13 

Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Jan Siml

Hello,


Enable network and disks on your VM, then do:
Run -> ONCE Ok Ignore errors. Ok
Run
Cheers


WTF! That worked.

Do you know why this works and what happens in the background? Is 
there a Bugzilla bug ID for this issue?


BTW, here is the list of devices before:

engine=# select type, device, address, is_managed, is_plugged, alias 
from vm_device where vm_id in (select vm_guid from vm_static where 
vm_name='prod-hub-201');
type|device |   address 
   | is_managed | is_plugged | alias

+---+--+++
 video  | qxl   | 
   | t  | t  |
 controller | virtio-scsi   | 
   | t  | t  |
 balloon| memballoon| 
   | t  | f  | balloon0
 graphics   | spice | 
   | t  | t  |
 controller | virtio-serial | {slot=0x06, bus=0x00, domain=0x, 
type=pci, function=0x0} | t  | t  | virtio-serial0
 disk   | disk  | {slot=0x07, bus=0x00, domain=0x, 
type=pci, function=0x0} | f  | t  | virtio-disk0
 memballoon | memballoon| {slot=0x08, bus=0x00, domain=0x, 
type=pci, function=0x0} | f  | t  | balloon0
 interface  | bridge| {slot=0x03, bus=0x00, domain=0x, 
type=pci, function=0x0} | f  | t  | net0
 interface  | bridge| {slot=0x09, bus=0x00, domain=0x, 
type=pci, function=0x0} | f  | t  | net1
 controller | scsi  | {slot=0x05, bus=0x00, domain=0x, 
type=pci, function=0x0} | f  | t  | scsi0
 controller | ide   | {slot=0x01, bus=0x00, domain=0x, 
type=pci, function=0x1} | f  | t  | ide
 controller | usb   | {slot=0x01, bus=0x00, domain=0x, 
type=pci, function=0x2} | t  | t  | usb
 channel| unix  | {bus=0, controller=0, type=virtio-serial, 
port=1}| f  | t  | channel0
 channel| unix  | {bus=0, controller=0, type=virtio-serial, 
port=2}| f  | t  | channel1
 channel| spicevmc  | {bus=0, controller=0, type=virtio-serial, 
port=3}| f  | t  | channel2
 interface  | bridge| 
   | t  | t  | net1
 interface  | bridge| 
   | t  | t  | net0
 disk   | cdrom | 
   | t  | f  | ide0-1-0
 disk   | cdrom | {bus=1, controller=0, type=drive, 
target=0, unit=0}  | f  | t  | ide0-1-0
 disk   | disk  | 
   | t  | t  | virtio-disk0

(20 rows)

and afterwards:

engine=# select type, device, address, is_managed, is_plugged, alias 
from vm_device where vm_id in (select vm_guid from vm_static where 
vm_name='prod-hub-201');
type|device |   address 
   | is_managed | is_plugged | alias

+---+--+++
 channel| spicevmc  | {type=virtio-serial, bus=0, controller=0, 
port=3}| f  | t  | channel2
 channel| unix  | {type=virtio-serial, bus=0, controller=0, 
port=1}| f  | t  | channel0
 interface  | bridge| {type=pci, slot=0x04, bus=0x00, 
domain=0x, function=0x0} | t  | t  | net1
 controller | usb   | {type=pci, slot=0x01, bus=0x00, 
domain=0x, function=0x2} | t  | t  | usb
 controller | virtio-serial | {type=pci, slot=0x06, bus=0x00, 
domain=0x, function=0x0} | t  | t  | virtio-serial0
 interface  | bridge| {type=pci, slot=0x03, bus=0x00, 
domain=0x, function=0x0} | t  | t  | net0
 controller | virtio-scsi   | {type=pci, slot=0x05, bus=0x00, 
domain=0x, function=0x0} | t  | t  | scsi0
 video  | qxl   | {type=pci, slot=0x02, bus=0x00, 
domain=0x, function=0x0} | t  | t  | video0
 channel| unix  | {type=virtio-serial, bus=0, controller=0, 
port=2}| f  | t  | channel1
 balloon| memballoon| 
   | t  | t  | balloon0
 graphics   | spice | 
   | t  | t  |
 disk   | cdrom | 
   | t  | f  | ide0-1-0
 disk   | disk  | {type=pci, slot=0x07, bus=0x00, 
domain=0x, function=0x0} | t  | t  | virtio-disk0

(13 rows)

Regards

Jan
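Comparing the two listings above: the "before" state carries duplicated rows for the same devices (two memballoon entries, two cdrom entries, and both managed and unmanaged bridge rows for net0/net1; 20 rows versus 13 afterwards), which is consistent with the Run Once workaround rebuilding the device list. A query along these lines, illustrative only and read-only, would flag other VMs left in that state after the upgrade:

```
-- Illustrative: list VMs whose device table carries the same alias
-- more than once (e.g. an unmanaged leftover next to the managed row).
SELECT vs.vm_name, vd.alias, count(*) AS copies
FROM vm_device vd
JOIN vm_static vs ON vs.vm_guid = vd.vm_id
WHERE vd.alias IS NOT NULL AND vd.alias <> ''
GROUP BY vs.vm_name, vd.alias
HAVING count(*) > 1;
```

This only spots duplicates by alias, the pattern visible in the listings here; it is a diagnostic, not a fix, and any cleanup of vm_device should be left to the engine (or taken with a DB backup in hand).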

Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Oliver Riesener
Hi,

Enable network and disks on your VM, then do:
Run -> ONCE Ok Ignore errors. Ok
Run
Cheers
Olri


> Am 07.03.2018 um 16:49 schrieb Arik Hadas :
> 
> 
> 
>> On Wed, Mar 7, 2018 at 5:32 PM, Jan Siml  wrote:
>> Hello Arik,
>> 
>> 
>>> we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9)
>>> and afterwards all nodes too. The cluster compatibility level
>>> has been set to 4.2.
>>> 
>>> Now we can't start a VM after it has been powered off. The only
>>> hint we found in engine.log is:
>>> 
>>> 2018-03-07 14:51:52,504+01 INFO
>>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-25)
>>> [f855b54a-56d9-4708-8a67-5609438ddadb] START,
>>> UpdateVmDynamicDataVDSCommand(
>>> UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
>>> vmId='a7bc4124-06cb-4909-9389-bcf727df1304',
>>> vmDynamic='org.ovirt.engine.co
>>> 
>>> re.common.businessentities.VmDynamic@491983e9'}),
>>> 
>>> log id: 7d49849e
>>> 2018-03-07 14:51:52,509+01 INFO
>>> [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-25)
>>> [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH,
>>> UpdateVmDynamicDataVDSCommand, log id: 7d49849e
>>> 2018-03-07 14:51:52,531+01 INFO
>>> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-25)
>>> [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateVDSCommand(
>>> 
>>> CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b',
>>> vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM
>>> [prod-hub-201]'}), log id: 4af1f227
>>> 2018-03-07 14:51:52,533+01 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-25)
>>> [f855b54a-56d9-4708-8a67-5609438ddadb] START,
>>> CreateBrokerVDSCommand(HostName = prod-node-210,
>>> 
>>> CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b',
>>> vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM
>>> [prod-hub-201]'}), log id: 71dcc8e7
>>> 2018-03-07 14:51:52,545+01 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-25)
>>> [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in
>>> 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host:
>>> 'prod-node-210': null
>>> 2018-03-07 14:51:52,546+01 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-25)
>>> [f855b54a-56d9-4708-8a67-5609438ddadb] Command
>>> 'CreateBrokerVDSCommand(HostName = prod-node-210,
>>> 
>>> CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b',
>>> vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM
>>> [prod-hub-201]'})' execution failed: null
>>> 2018-03-07 14:51:52,546+01 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-25)
>>> [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH,
>>> CreateBrokerVDSCommand, log id: 71dcc8e7
>>> 2018-03-07 14:51:52,546+01 ERROR
>>> [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
>>> (EE-ManagedThreadFactory-engine-Thread-25) [f855b5
>>> 4a-56d9-4708-8a67-5609438ddadb] Failed to create VM:
>>> java.lang.NullPointerException
>>> at
>>> 
>>> org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066)
>>>   [vdsbroker.jar:]
>>> 
>>> [...]
>>> 
>>> But this doesn't lead us to the root cause. I haven't found any
>>> matching bug tickets in release notes for upcoming 4.2.1. Can
>>> anyone help here?
>>> 
>>> 
>>> What's the mac address of that VM?
>>> You can find it in the UI or with:
>>> 
>>> select mac_addr from vm_interface where vm_guid in (select vm_guid
>>> from vm_static where vm_name='');
>>> 
>>> 
>>> Actually, a different question: does this VM have an unplugged network interface?
>> 
>> The VM has two NICs. Both are plugged.
>> 
>> The MAC addresses are 00:1a:4a:18:01:52 for nic1 and 00:1a:4a:36:01:67 for 
>> nic2.
> 
> OK, those seem like two valid mac addresses so maybe something is wrong with 
> the vm devices.
> Could you please provide the output of:
> select type, device, address, is_managed, is_plugged, alias from vm_device 
> where vm_id in (select vm_guid from vm_static where vm_name='');
>  
>> 
>> Regards
>> 
>> Jan
> 
> 

Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Jan Siml

Hello Arik,


         we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9)
         and afterwards all nodes too. The cluster compatibility level
         has been set to 4.2.

         Now we can't start a VM after it has been powered off. The only
         hint we found in engine.log is:

         2018-03-07 14:51:52,504+01 INFO

[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]

         (EE-ManagedThreadFactory-engine-Thread-25)
         [f855b54a-56d9-4708-8a67-5609438ddadb] START,
         UpdateVmDynamicDataVDSCommand(
         UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
         vmId='a7bc4124-06cb-4909-9389-bcf727df1304',
         vmDynamic='org.ovirt.engine.co 

re.common.businessentities.VmDynamic@491983e9'}),


         log id: 7d49849e
         2018-03-07 14:51:52,509+01 INFO

[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]

         (EE-ManagedThreadFactory-engine-Thread-25)
         [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH,
         UpdateVmDynamicDataVDSCommand, log id: 7d49849e
         2018-03-07 14:51:52,531+01 INFO
         [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
         (EE-ManagedThreadFactory-engine-Thread-25)
         [f855b54a-56d9-4708-8a67-5609438ddadb] START,
CreateVDSCommand(

CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b',

         vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM
         [prod-hub-201]'}), log id: 4af1f227
         2018-03-07 14:51:52,533+01 INFO

[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]

         (EE-ManagedThreadFactory-engine-Thread-25)
         [f855b54a-56d9-4708-8a67-5609438ddadb] START,
         CreateBrokerVDSCommand(HostName = prod-node-210,

CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b',

         vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM
         [prod-hub-201]'}), log id: 71dcc8e7
         2018-03-07 14:51:52,545+01 ERROR

[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]

         (EE-ManagedThreadFactory-engine-Thread-25)
         [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in
         'CreateBrokerVDS' method, for vds: 'prod-node-210'; host:
         'prod-node-210': null
         2018-03-07 14:51:52,546+01 ERROR

[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]

         (EE-ManagedThreadFactory-engine-Thread-25)
         [f855b54a-56d9-4708-8a67-5609438ddadb] Command
         'CreateBrokerVDSCommand(HostName = prod-node-210,

CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b',

         vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM
         [prod-hub-201]'})' execution failed: null
         2018-03-07 14:51:52,546+01 INFO

[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]

         (EE-ManagedThreadFactory-engine-Thread-25)
         [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH,
         CreateBrokerVDSCommand, log id: 71dcc8e7
         2018-03-07 14:51:52,546+01 ERROR
         [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
         (EE-ManagedThreadFactory-engine-Thread-25) [f855b5
         4a-56d9-4708-8a67-5609438ddadb] Failed to create VM:
         java.lang.NullPointerException
         at

org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066)

           [vdsbroker.jar:]

         [...]

         But this doesn't lead us to the root cause. I haven't
found any
         matching bug tickets in release notes for upcoming
4.2.1. Can
         anyone help here?


     What's the mac address of that VM?
     You can find it in the UI or with:

     select mac_addr from vm_interface where vm_guid in (select
vm_guid
     from vm_static where vm_name='');


Actually, a different question: does this VM have an unplugged
network interface?


The VM has two NICs. Both are plugged.

The MAC addresses are 00:1a:4a:18:01:52 for nic1 and
00:1a:4a:36:01:67 for nic2.


OK, those seem like two valid mac addresses so maybe something is wrong 
with the vm devices.

Could you please 

Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Arik Hadas
On Wed, Mar 7, 2018 at 5:32 PM, Jan Siml  wrote:

> Hello Arik,
>
>
> we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9)
>> and afterwards all nodes too. The cluster compatibility level
>> has been set to 4.2.
>>
>> Now we can't start a VM after it has been powered off. The only
>> hint we found in engine.log is:
>>
>> 2018-03-07 14:51:52,504+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, UpdateVmDynamicDataVDSCommand(UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@491983e9'}), log id: 7d49849e
>> 2018-03-07 14:51:52,509+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, UpdateVmDynamicDataVDSCommand, log id: 7d49849e
>> 2018-03-07 14:51:52,531+01 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateVDSCommand(CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 4af1f227
>> 2018-03-07 14:51:52,533+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 71dcc8e7
>> 2018-03-07 14:51:52,545+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host: 'prod-node-210': null
>> 2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Command 'CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'})' execution failed: null
>> 2018-03-07 14:51:52,546+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, CreateBrokerVDSCommand, log id: 71dcc8e7
>> 2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed to create VM: java.lang.NullPointerException
>> at org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066) [vdsbroker.jar:]
>>
>> [...]
>>
>> But this doesn't lead us to the root cause. I haven't found any
>> matching bug tickets in release notes for upcoming 4.2.1. Can
>> anyone help here?
>>
>>
>> What's the mac address of that VM?
>> You can find it in the UI or with:
>>
>> select mac_addr from vm_interface where vm_guid in (select vm_guid
>> from vm_static where vm_name='');
>>
>>
>> Actually, a different question - does this VM have an unplugged network
>> interface?
>>
>
> The VM has two NICs. Both are plugged.
>
> The MAC addresses are 00:1a:4a:18:01:52 for nic1 and 00:1a:4a:36:01:67 for
> nic2.
>

OK, those seem like two valid mac addresses so maybe something is wrong
with the vm devices.
Could you please provide the output of:

select type, device, address, is_managed, is_plugged, alias from vm_device
where vm_id in (select vm_guid from vm_static where vm_name='');
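
The query above returns one row per VM device. As a rough illustration of what to look for in those rows (this is a sketch of my own, not oVirt code, and the sample data is invented), the NPE in writeInterfaces usually points at an interface device in an odd state, e.g. unplugged, unmanaged, or missing its address:

```python
# Illustrative sketch (not oVirt code): given rows shaped like the output of
# the vm_device query above, flag interface devices that commonly explain a
# NullPointerException while the engine builds the libvirt domain XML.
# The sample rows below are invented for demonstration.
def suspicious_interfaces(rows):
    problems = []
    for row in rows:
        if row["type"] != "interface":
            continue  # disks, controllers, etc. are not built by writeInterfaces
        if not row["is_plugged"]:
            problems.append((row["alias"], "unplugged"))
        if not row["is_managed"]:
            problems.append((row["alias"], "unmanaged"))
        if not row["address"]:
            problems.append((row["alias"], "no address"))
    return problems

rows = [
    {"type": "interface", "device": "bridge", "address": "",
     "is_managed": True, "is_plugged": True, "alias": "net0"},
    {"type": "disk", "device": "disk", "address": "{bus: 0}",
     "is_managed": True, "is_plugged": True, "alias": "vda"},
]
print(suspicious_interfaces(rows))  # flags net0 for having no address
```

Any row the sketch flags would be the first thing to compare against a VM that still starts cleanly.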


>
> Regards
>
> Jan
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Jan Siml

Hello Arik,



we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9)
and afterwards all nodes too. The cluster compatibility level
has been set to 4.2.

Now we can't start a VM after it has been powered off. The only
hint we found in engine.log is:

2018-03-07 14:51:52,504+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, UpdateVmDynamicDataVDSCommand(UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@491983e9'}), log id: 7d49849e
2018-03-07 14:51:52,509+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, UpdateVmDynamicDataVDSCommand, log id: 7d49849e
2018-03-07 14:51:52,531+01 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateVDSCommand(CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 4af1f227
2018-03-07 14:51:52,533+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 71dcc8e7
2018-03-07 14:51:52,545+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host: 'prod-node-210': null
2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Command 'CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'})' execution failed: null
2018-03-07 14:51:52,546+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, CreateBrokerVDSCommand, log id: 71dcc8e7
2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed to create VM: java.lang.NullPointerException
at org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066) [vdsbroker.jar:]

[...]

But this doesn't lead us to the root cause. I haven't found any
matching bug tickets in release notes for upcoming 4.2.1. Can
anyone help here?


What's the mac address of that VM?
You can find it in the UI or with:

select mac_addr from vm_interface where vm_guid in (select vm_guid
from vm_static where vm_name='');


Actually, a different question - does this VM have an unplugged network interface?


The VM has two NICs. Both are plugged.

The MAC addresses are 00:1a:4a:18:01:52 for nic1 and 
00:1a:4a:36:01:67 for nic2.


Regards

Jan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Arik Hadas
On Wed, Mar 7, 2018 at 5:20 PM, Arik Hadas  wrote:

>
>
> On Wed, Mar 7, 2018 at 4:11 PM, Jan Siml  wrote:
>
>> Hello,
>>
>> we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9) and
>> afterwards all nodes too. The cluster compatibility level has been set to
>> 4.2.
>>
>> Now we can't start a VM after it has been powered off. The only hint we
>> found in engine.log is:
>>
>> 2018-03-07 14:51:52,504+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, UpdateVmDynamicDataVDSCommand(UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@491983e9'}), log id: 7d49849e
>> 2018-03-07 14:51:52,509+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, UpdateVmDynamicDataVDSCommand, log id: 7d49849e
>> 2018-03-07 14:51:52,531+01 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateVDSCommand(CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 4af1f227
>> 2018-03-07 14:51:52,533+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 71dcc8e7
>> 2018-03-07 14:51:52,545+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host: 'prod-node-210': null
>> 2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Command 'CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'})' execution failed: null
>> 2018-03-07 14:51:52,546+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, CreateBrokerVDSCommand, log id: 71dcc8e7
>> 2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed to create VM: java.lang.NullPointerException
>> at org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066) [vdsbroker.jar:]
>>
>> [...]
>>
>> But this doesn't lead us to the root cause. I haven't found any matching
>> bug tickets in release notes for upcoming 4.2.1. Can anyone help here?
>>
>
> What's the mac address of that VM?
> You can find it in the UI or with:
>
> select mac_addr from vm_interface where vm_guid in (select vm_guid from
> vm_static where vm_name='');
>

Actually, a different question - does this VM have an unplugged network interface?


>
>
>>
>> Kind regards
>>
>> Jan Siml
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Arik Hadas
On Wed, Mar 7, 2018 at 4:11 PM, Jan Siml  wrote:

> Hello,
>
> we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9) and
> afterwards all nodes too. The cluster compatibility level has been set to
> 4.2.
>
> Now we can't start a VM after it has been powered off. The only hint we
> found in engine.log is:
>
> 2018-03-07 14:51:52,504+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, UpdateVmDynamicDataVDSCommand(UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@491983e9'}), log id: 7d49849e
> 2018-03-07 14:51:52,509+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, UpdateVmDynamicDataVDSCommand, log id: 7d49849e
> 2018-03-07 14:51:52,531+01 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateVDSCommand(CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 4af1f227
> 2018-03-07 14:51:52,533+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 71dcc8e7
> 2018-03-07 14:51:52,545+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host: 'prod-node-210': null
> 2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Command 'CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'})' execution failed: null
> 2018-03-07 14:51:52,546+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, CreateBrokerVDSCommand, log id: 71dcc8e7
> 2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed to create VM: java.lang.NullPointerException
> at org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066) [vdsbroker.jar:]
>
> [...]
>
> But this doesn't lead us to the root cause. I haven't found any matching
> bug tickets in release notes for upcoming 4.2.1. Can anyone help here?
>

What's the mac address of that VM?
You can find it in the UI or with:

select mac_addr from vm_interface where vm_guid in (select vm_guid from
vm_static where vm_name='');


>
> Kind regards
>
> Jan Siml
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE

2018-03-07 Thread Jan Siml

Hello,

we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9) and 
afterwards all nodes too. The cluster compatibility level has been set 
to 4.2.


Now we can't start a VM after it has been powered off. The only hint we 
found in engine.log is:


2018-03-07 14:51:52,504+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, UpdateVmDynamicDataVDSCommand(UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@491983e9'}), log id: 7d49849e
2018-03-07 14:51:52,509+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, UpdateVmDynamicDataVDSCommand, log id: 7d49849e
2018-03-07 14:51:52,531+01 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateVDSCommand(CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 4af1f227
2018-03-07 14:51:52,533+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 71dcc8e7
2018-03-07 14:51:52,545+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host: 'prod-node-210': null
2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Command 'CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'})' execution failed: null
2018-03-07 14:51:52,546+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, CreateBrokerVDSCommand, log id: 71dcc8e7
2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed to create VM: java.lang.NullPointerException
at org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066) [vdsbroker.jar:]

[...]

But this doesn't lead us to the root cause. I haven't found any matching 
bug tickets in release notes for upcoming 4.2.1. Can anyone help here?


Kind regards

Jan Siml
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Simone Tiraboschi
On Tue, Feb 13, 2018 at 12:28 PM, Simone Tiraboschi wrote:

>
>
> On Tue, Feb 13, 2018 at 12:26 PM, Stefano Danzi  wrote:
>
>> Strange thing.
>>
>> after the "vdsm-client Host getCapabilities" command, the cluster cpu type became
>> "Intel Sandybridge Family". Same thing for all VMs.
>>
>
> Can you please share engine.log ?
>

OK, I found a specific patch for that issue:
https://gerrit.ovirt.org/#/c/86913/
but the patch didn't land
in ovirt-engine-dbscripts-4.2.1.6-1.el7.centos.noarch, so every 4.2.0 ->
4.2.1 upgrade will result in that issue if the cluster CPU family is not in
  Intel Nehalem Family-IBRS
  Intel Nehalem-IBRS Family
  Intel Westmere-IBRS Family
  Intel SandyBridge-IBRS Family
  Intel Haswell-noTSX-IBRS Family
  Intel Haswell-IBRS Family
  Intel Broadwell-noTSX-IBRS Family
  Intel Broadwell-IBRS Family
  Intel Skylake Family
  Intel Skylake-IBRS Family
as in your case.

Let's see if we can have a quick respin.
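
In other words, the upgrade only remapped clusters whose CPU family was already in the list above; everything else was left with an unknown type. A minimal sketch of that condition (the family names are copied from the list, but the check itself is illustrative, not the actual dbscripts SQL):

```python
# Sketch of the condition described above: a 4.2.0 -> 4.2.1 upgrade only
# handled clusters whose CPU family was in this set, so anything else,
# e.g. "Intel Conroe Family", ended up with an unknown cluster CPU type.
HANDLED_FAMILIES = {
    "Intel Nehalem Family-IBRS",
    "Intel Nehalem-IBRS Family",
    "Intel Westmere-IBRS Family",
    "Intel SandyBridge-IBRS Family",
    "Intel Haswell-noTSX-IBRS Family",
    "Intel Haswell-IBRS Family",
    "Intel Broadwell-noTSX-IBRS Family",
    "Intel Broadwell-IBRS Family",
    "Intel Skylake Family",
    "Intel Skylake-IBRS Family",
}

def upgrade_affected(cluster_cpu_name):
    """True if the upgrade would leave this cluster's CPU type unknown."""
    return cluster_cpu_name not in HANDLED_FAMILIES

print(upgrade_affected("Intel Conroe Family"))   # True: hit by the bug
print(upgrade_affected("Intel Skylake Family"))  # False: remapped cleanly
```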


>
>
>> Now I can run VMs.
>>
>> Il 13/02/2018 11:28, Simone Tiraboschi ha scritto:
>>
>> Ciao Stefano,
>> we have to properly investigate this: thanks for the report.
>>
>> Can you please attach from your host the output of
>> - grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
>> - vdsm-client Host getCapabilities
>>
>> Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1
>> upgrade?
>>
>>
>>
>> On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi  wrote:
>>
>>> Hello!
>>>
>>> In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any
>>> VM.
>>> Hosted engine starts regularly.
>>>
>>> I have a single host with Hosted Engine.
>>>
>>> Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
>>>
>>> When I start any VM I get this error: "The CPU type of the cluster is
>>> unknown. It's possible to change the cluster cpu or set a different one per
>>> VM."
>>>
>>> All VMs have " Guest CPU Type: N/D"
>>>
>>> Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu
>>> type before the upgrade), my CPU should be Ivy Bridge but it isn't in the
>>> dropdown list.
>>>
>>> If I try to select a similar cpu (SandyBridge IBRS) I get an error. I
>>> can't change cluster cpu type when I have running hosts with a lower CPU
>>> type.
>>> I can't put the host in maintenance because the hosted engine is running on it.
>>>
>>> How can I solve this?
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>> --
>>
>> Stefano Danzi
>> Responsabile ICT
>>
>> HAWAI ITALIA S.r.l.
>> Via Forte Garofolo, 16
>> 37057 S. Giovanni Lupatoto Verona Italia
>>
>> P. IVA 01680700232
>>
>> tel. +39/045/8266400
>> fax +39/045/8266401
>> Web www.hawai.it
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Simone Tiraboschi
On Tue, Feb 13, 2018 at 12:26 PM, Stefano Danzi  wrote:

> Strange thing.
>
> after the "vdsm-client Host getCapabilities" command, the cluster cpu type became
> "Intel Sandybridge Family". Same thing for all VMs.
>

Can you please share engine.log ?


> Now I can run VMs.
>
> Il 13/02/2018 11:28, Simone Tiraboschi ha scritto:
>
> Ciao Stefano,
> we have to properly investigate this: thanks for the report.
>
> Can you please attach from your host the output of
> - grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
> - vdsm-client Host getCapabilities
>
> Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1
> upgrade?
>
>
>
> On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi  wrote:
>
>> Hello!
>>
>> In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM.
>> Hosted engine starts regularly.
>>
>> I have a single host with Hosted Engine.
>>
>> Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
>>
>> When I start any VM I get this error: "The CPU type of the cluster is
>> unknown. It's possible to change the cluster cpu or set a different one per
>> VM."
>>
>> All VMs have " Guest CPU Type: N/D"
>>
>> Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu type
>> before the upgrade), my CPU should be Ivy Bridge but it isn't in the
>> dropdown list.
>>
>> If I try to select a similar cpu (SandyBridge IBRS) I get an error. I
>> can't change cluster cpu type when I have running hosts with a lower CPU
>> type.
>> I can't put the host in maintenance because the hosted engine is running on it.
>>
>> How can I solve this?
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> --
>
> Stefano Danzi
> Responsabile ICT
>
> HAWAI ITALIA S.r.l.
> Via Forte Garofolo, 16
> 37057 S. Giovanni Lupatoto Verona Italia
>
> P. IVA 01680700232
>
> tel. +39/045/8266400
> fax +39/045/8266401
> Web www.hawai.it
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Stefano Danzi

Strange thing.

after the "vdsm-client Host getCapabilities" command, the cluster cpu type 
became "Intel Sandybridge Family". Same thing for all VMs.

Now I can run VMs.

Il 13/02/2018 11:28, Simone Tiraboschi ha scritto:

Ciao Stefano,
we have to properly investigate this: thanks for the report.

Can you please attach from your host the output of
- grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
- vdsm-client Host getCapabilities

Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1 
upgrade?




On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi wrote:


Hello!

In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start
any VM.
Hosted engine starts regularly.

I have a single host with Hosted Engine.

Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz

When I start any VM I get this error: "The CPU type of the cluster
is unknown. It's possible to change the cluster cpu or set a
different one per VM."

All VMs have " Guest CPU Type: N/D"

Cluster now has CPU Type "Intel Conroe Family" (I don't remember
cpu type before the upgrade), my CPU should be Ivy Bridge but it
isn't in the dropdown list.

If I try to select a similar cpu (SandyBridge IBRS) I get an
error. I can't change cluster cpu type when I have running hosts
with a lower CPU type.
I can't put the host in maintenance because the hosted engine is running
on it.

How can I solve this?

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--

Stefano Danzi
Responsabile ICT

HAWAI ITALIA S.r.l.
Via Forte Garofolo, 16
37057 S. Giovanni Lupatoto Verona Italia

P. IVA 01680700232

tel. +39/045/8266400
fax +39/045/8266401
Web www.hawai.it

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Simone Tiraboschi
Ciao Stefano,
we have to properly investigate this: thanks for the report.

Can you please attach from your host the output of
- grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
- vdsm-client Host getCapabilities

Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1
upgrade?
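
For context, "vdsm-client Host getCapabilities" prints a JSON document. A small sketch of pulling out just the CPU-related fields from a saved copy (the sample document below is a tiny invented stand-in; real output is much larger and the exact key set depends on the vdsm version):

```python
# Illustrative: parse a saved copy of "vdsm-client Host getCapabilities"
# output and extract the CPU fields. The "sample" string is invented
# demonstration data, not real vdsm output.
import json

sample = ('{"cpuModel": "Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz", '
          '"cpuFlags": "fpu,vme,ssse3,model_SandyBridge,model_Westmere", '
          '"cpuCores": "4"}')

caps = json.loads(sample)
cpu_keys = {k: v for k, v in caps.items() if k.startswith("cpu")}
# vdsm reports model_* pseudo-flags; these are what the engine compares
# against the cluster CPU type.
models = [f for f in caps["cpuFlags"].split(",") if f.startswith("model_")]
print(cpu_keys["cpuModel"])
print(models)
```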



On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi  wrote:

> Hello!
>
> In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM.
> Hosted engine starts regularly.
>
> I have a sigle host with Hosted Engine.
>
> Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
>
> When I start any VM I get this error: "The CPU type of the cluster is
> unknown. Its possible to change the cluster cpu or set a different one per
> VM."
>
> All VMs have " Guest CPU Type: N/D"
>
> Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu type
> before the upgrade), my CPU should be Ivy Bridge but it isn't in the
> dropdown list.
>
> If I try to select a similar cpu (SandyBridge IBRS) I get an error. I
> can't chage cluster cpu type when I have running hosts with a lower CPU
> type.
> I can't put host in maintenance because  hosted engine is running on it.
>
> How I can solve?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Stefano Danzi

Hello!

In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM.
Hosted engine starts regularly.

I have a single host with Hosted Engine.

Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz

When I start any VM I get this error: "The CPU type of the cluster is 
unknown. It's possible to change the cluster cpu or set a different one 
per VM."


All VMs have " Guest CPU Type: N/D"

Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu 
type before the upgrade), my CPU should be Ivy Bridge but it isn't in 
the dropdown list.


If I try to select a similar cpu (SandyBridge IBRS) I get an error. I 
can't change cluster cpu type when I have running hosts with a lower CPU 
type.

I can't put the host in maintenance because the hosted engine is running on it.

How can I solve this?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users