[ovirt-users] Update single node environment from 4.3.3 to 4.3.5 problem

2019-08-22 Thread Gianluca Cecchi
Hello,
after updating the hosted engine from 4.3.3 to 4.3.5, and then the only host
composing the environment (plain CentOS 7.6), it seems the host is not able
to start the vdsm daemons.

The kernel installed with the update is kernel-3.10.0-957.27.2.el7.x86_64.
The same problem occurs with the previously running kernel,
3.10.0-957.12.2.el7.x86_64.

[root@ovirt01 vdsm]# uptime
 00:50:08 up 25 min,  3 users,  load average: 0.60, 0.67, 0.60
[root@ovirt01 vdsm]#

[root@ovirt01 vdsm]# systemctl status vdsmd -l
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/etc/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
   Active: failed (Result: start-limit) since Fri 2019-08-23 00:37:27 CEST;
7s ago
  Process: 25810 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
--pre-start (code=exited, status=1/FAILURE)

Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Failed to start Virtual
Desktop Server Manager.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Unit vdsmd.service entered
failed state.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: vdsmd.service failed.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: vdsmd.service holdoff time
over, scheduling restart.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Stopped Virtual Desktop Server
Manager.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: start request repeated too
quickly for vdsmd.service
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Failed to start Virtual
Desktop Server Manager.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Unit vdsmd.service entered
failed state.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: vdsmd.service failed.
[root@ovirt01 vdsm]#
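Since the failure is in the ExecStartPre step, the quickest way to see the real error is to run that hook by hand. A minimal triage sketch (paths taken from the status output above; commands are guarded so they no-op where unavailable):

```shell
# Inspect recent vdsmd/supervdsmd messages, clear the start-limit
# state, then run the failing pre-start hook directly to capture
# its error output.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u vdsmd -u supervdsmd --no-pager | tail -n 50
fi

if command -v systemctl >/dev/null 2>&1; then
    # clears the "start request repeated too quickly" state
    systemctl reset-failed vdsmd
fi

PRESTART=/usr/libexec/vdsm/vdsmd_init_common.sh
if [ -x "$PRESTART" ]; then
    "$PRESTART" --pre-start
    echo "pre-start exit code: $?"
fi
```

Running the pre-start hook interactively usually prints the same traceback that systemd swallows into the journal.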

[root@ovirt01 vdsm]# pwd
/var/log/vdsm
[root@ovirt01 vdsm]# ll -t | head
total 118972
-rw-r--r--. 1 root root 3406465 Aug 23 00:25 supervdsm.log
-rw-r--r--. 1 root root   73621 Aug 23 00:25 upgrade.log
-rw-r--r--. 1 vdsm kvm         0 Aug 23 00:01 vdsm.log
-rw-r--r--. 1 vdsm kvm   538480 Aug 22 23:46 vdsm.log.1.xz
-rw-r--r--. 1 vdsm kvm   187486 Aug 22 23:46 mom.log
-rw-r--r--. 1 vdsm kvm   621320 Aug 22 22:01 vdsm.log.2.xz
-rw-r--r--. 1 root root  374464 Aug 22 22:00 supervdsm.log.1.xz
-rw-r--r--. 1 vdsm kvm  2097122 Aug 22 21:53 mom.log.1
-rw-r--r--. 1 vdsm kvm   636212 Aug 22 20:01 vdsm.log.3.xz
[root@ovirt01 vdsm]#

link to upgrade.log contents here:
https://drive.google.com/file/d/17jtX36oH1hlbNUAiVhdBkVDbd28QegXG/view?usp=sharing

link to supervdsm.log (in gzip format) here:
https://drive.google.com/file/d/1l61ePU-eFS_xVHEAHnJthzTTnTyzu0MP/view?usp=sharing

It seems that since the update I get this kind of line inside it:
restore-net::DEBUG::2019-08-22
23:56:38,591::cmdutils::133::root::(exec_cmd) /sbin/tc filter del dev eth0
pref 5000 (cwd None)
restore-net::DEBUG::2019-08-22
23:56:38,595::cmdutils::141::root::(exec_cmd) FAILED: <err> = 'RTNETLINK
answers: Invalid argument\nWe have an error talking to the kernel\n'; <rc>
= 2
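The failing `tc filter del` suggests vdsm's network restore is trying to remove a filter that no longer exists in the form it expects. A small hedged sketch to see what the kernel actually has on that device before the delete (device name eth0 taken from the log line; adjust to your NIC):

```shell
DEV=eth0   # NIC named in the failing log line
if command -v tc >/dev/null 2>&1; then
    # Show what filters actually exist at each attachment point;
    # compare against the "pref 5000" filter vdsm tries to delete.
    tc filter show dev "$DEV"
    tc filter show dev "$DEV" ingress
    tc qdisc show dev "$DEV"
fi
```

If no filter at pref 5000 exists, the delete failing with EINVAL is cosmetic noise rather than the cause of vdsmd not starting.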

[root@ovirt01 vdsm]# systemctl status supervdsmd -l
● supervdsmd.service - Auxiliary vdsm service for running helper functions
as root
   Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static;
vendor preset: enabled)
   Active: active (running) since Fri 2019-08-23 00:25:17 CEST; 23min ago
 Main PID: 4540 (supervdsmd)
Tasks: 3
   CGroup: /system.slice/supervdsmd.service
   └─4540 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile
/var/run/vdsm/svdsm.sock

Aug 23 00:25:17 ovirt01.mydomain systemd[1]: Started Auxiliary vdsm service
for running helper functions as root.
[root@ovirt01 vdsm]#

[root@ovirt01 vdsm]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast master
ovirtmgmt state UP group default qlen 1000
link/ether b8:ae:ed:7f:17:11 brd ff:ff:ff:ff:ff:ff
3: wlan0:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 00:c2:c6:a4:18:c5 brd ff:ff:ff:ff:ff:ff
4: ovs-system:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 36:21:c1:5e:70:aa brd ff:ff:ff:ff:ff:ff
5: br-int:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 46:d8:db:81:41:4e brd ff:ff:ff:ff:ff:ff
22: ovirtmgmt:  mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether b8:ae:ed:7f:17:11 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.211/24 brd 192.168.1.255 scope global ovirtmgmt
   valid_lft forever preferred_lft forever
[root@ovirt01 vdsm]#

[root@ovirt01 vdsm]# ip route show
default via 192.168.1.1 dev ovirtmgmt
192.168.1.0/24 dev ovirtmgmt proto kernel scope link src 192.168.1.211
[root@ovirt01 vdsm]#

[root@ovirt01 vdsm]# brctl show
bridge name bridge id STP enabled interfaces
ovirtmgmt 8000.b8aeed7f1711 no eth0
[root@ovirt01 vdsm]#

[root@ovirt01 vdsm]# systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled;
vendor preset: disabled)
   Active: active (exited) 

[ovirt-users] Re: Evaluate oVirt to remove VMware

2019-08-22 Thread dsalade
Hey Paul,

Thanks for the reply!

Not really sure here; I read the oVirt 3.0 PDF and it says you need to enable
LACP for Cisco switches.
This is really no longer a learning setup, just a headache.

I've tried multiple ways and still no luck. The next approach I am trying is this:
https://www.ovirt.org/develop/networking/bonding-vlan-bridge.html

I'm not too hopeful any longer, but I'm going to give it a go.

Thanks!!!
Allen
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7ADQYAFSVXBGDMOL3VLF43W54HX3A7GT/


[ovirt-users] Re: Need to enable STP on ovirt bridges

2019-08-22 Thread Curtis E. Combs Jr.
Thanks, I'm just going to revert back to bridges.

On Thu, Aug 22, 2019 at 11:50 AM Dominik Holler  wrote:
>
>
>
> On Thu, Aug 22, 2019 at 3:06 PM Curtis E. Combs Jr.  
> wrote:
>>
>> Seems like the STP options are so common and necessary that it would
>> be a priority over seldom-used bridge_opts. I know what STP is and I'm
>> not even a networking guy - never even heard of half of the
>> bridge_opts that have switches in the UI.
>>
>> Anyway. I wanted to try the openvswitches, so I reinstalled all of my
>> nodes and used "openvswitch (Technology Preview)" as the engine-setup
>> option for the first host. I made a new Cluster for my nodes, added
>> them all to the new cluster, created a new "logical network" for the
>> internal network and attached it to the internal network ports.
>>
>> Now, when I go to create a new VM, I don't even have either the
>> ovirtmgmt switch OR the internal switch as an option. The drop-down is
>> empty as if I don't have any vnic-profiles.
>>
>
> openvswitch clusters are limited to ovn networks.
> You can create one like described in
> https://www.ovirt.org/documentation/admin-guide/chap-External_Providers.html#connecting-an-ovn-network-to-a-physical-network
>
>
>>
>> On Thu, Aug 22, 2019 at 7:34 AM Tony Pearce  wrote:
>> >
>> > Hi Dominik, would you mind sharing the use case for stp via API Only? I am 
>> > keen to know this.
>> > Thanks
>> >
>> >
>> > On Thu., 22 Aug. 2019, 19:24 Dominik Holler,  wrote:
>> >>
>> >>
>> >>
>> >> On Thu, Aug 22, 2019 at 1:08 PM Miguel Duarte de Mora Barroso 
>> >>  wrote:
>> >>>
>> >>> On Sat, Aug 17, 2019 at 11:27 AM  wrote:
>> >>> >
>> >>> > Hello. I have been trying to figure out an issue for a very long time.
>> >>> > That issue relates to the ethernet and 10gb fc links that I have on my
>> >>> > cluster being disabled any time a migration occurs.
>> >>> >
>> >>> > I believe this is because I need to have STP turned on in order to
>> >>> > participate with the switch. However, there does not seem to be any
>> >>> > way to tell oVirt to stop turning it off! Very frustrating.
>> >>> >
>> >>> > After entering a cronjob that enables stp on all bridges every 1
>> >>> > minute, the migration issue disappears
>> >>> >
>> >>> > Is there any way at all to do without this cronjob and set STP to be
>> >>> > ON without having to resort to such a silly solution?
>> >>>
>> >>> Vdsm exposes a per bridge STP knob that you can use for this. By
>> >>> default it is set to false, which is probably why you had to use this
>> >>> shenanigan.
>> >>>
>> >>> You can, for instance:
>> >>>
>> >>> # show present state
>> >>> [vagrant@vdsm ~]$ ip a
>> >>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>> >>> group default qlen 1000
>> >>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> >>> inet 127.0.0.1/8 scope host lo
>> >>>valid_lft forever preferred_lft forever
>> >>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>> >>> state UP group default qlen 1000
>> >>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>> >>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>> >>> state UP group default qlen 1000
>> >>> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
>> >>> inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute 
>> >>> eth1
>> >>>valid_lft forever preferred_lft forever
>> >>> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
>> >>>valid_lft forever preferred_lft forever
>> >>> 19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>> >>> group default qlen 1000
>> >>> link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
>> >>>
>> >>> # show example bridge configuration - you're looking for the STP knob 
>> >>> here.
>> >>> [root@vdsm ~]$ cat bridged_net_with_stp
>> >>> {
>> >>>   "bondings": {},
>> >>>   "networks": {
>> >>> "test-network": {
>> >>>   "nic": "eth0",
>> >>>   "switch": "legacy",
>> >>>   "bridged": true,
>> >>>   "stp": true
>> >>> }
>> >>>   },
>> >>>   "options": {
>> >>> "connectivityCheck": false
>> >>>   }
>> >>> }
>> >>>
>> >>> # issue setup networks command:
>> >>> [root@vdsm ~]$ vdsm-client -f bridged_net_with_stp Host setupNetworks
>> >>> {
>> >>> "code": 0,
>> >>> "message": "Done"
>> >>> }
>> >>>
>> >>> # show bridges
>> >>> [root@vdsm ~]$ brctl show
>> >>> bridge name bridge id STP enabled interfaces
>> >>> ;vdsmdummy; 8000. no
>> >>> test-network 8000.52540041fb37 yes eth0
>> >>>
>> >>> # show final state
>> >>> [root@vdsm ~]$ ip a
>> >>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>> >>> group default qlen 1000
>> >>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> >>> inet 127.0.0.1/8 scope host lo
>> >>>valid_lft forever preferred_lft forever
>> >>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>> >>> master test-network state UP group default qlen 1000
>> >>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>> >>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>> >>> state UP group default qlen 1000
>> >>> link/ether 
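As a follow-up to the example above, the kernel's own view of the STP flag can be double-checked through sysfs once setupNetworks returns (bridge name taken from the example; a hedged sketch):

```shell
BR=test-network   # bridge name from the setupNetworks example
STP_FILE=/sys/class/net/$BR/bridge/stp_state
if [ -r "$STP_FILE" ]; then
    # 0 = STP disabled, 1 = STP enabled on this kernel bridge
    cat "$STP_FILE"
fi
```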

[ovirt-users] Re: large size OVA import fails

2019-08-22 Thread Liran Rotenberg
There is a default timeout for ansible, which is 30 minutes.
You may refer to:
https://bugzilla.redhat.com/show_bug.cgi?id=1528960
https://github.com/oVirt/ovirt-engine/blob/master/packaging/services/ovirt-engine/ovirt-engine.conf.in

"
# Specify the ansible-playbook command execution timeout in minutes.
It's used for any task, which executes
# AnsibleExecutor class. To change the value permanentaly create a
conf file 99-ansible-playbook-timeout.conf in
# /etc/ovirt-engine/engine.conf.d/
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=30
"

Please try to create 99-ansible-playbook-timeout.conf in
/etc/ovirt-engine/engine.conf.d/ and set
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=
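Concretely, that would look something like the following. The value 120 (minutes) is only an example — size the timeout to comfortably cover your largest import — and note the engine only re-reads its conf.d on restart:

```shell
CONF=/etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
mkdir -p "$(dirname "$CONF")"
# 120 minutes is an example value, not a recommendation.
cat > "$CONF" <<'EOF'
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=120
EOF
# Restart so the engine picks up the new drop-in.
command -v systemctl >/dev/null 2>&1 && systemctl restart ovirt-engine
```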

Regards,
Liran.


On Thu, Aug 22, 2019 at 4:06 PM  wrote:
>
> I have a fairly large OVA (~200GB) that was exported from oVirt4.3.5. I'm 
> trying to import it into a new cluster, also oVirt4.3.5. The import starts 
> fine but fails again and again.
> Everything I can find online appears to be outdated, mentioning incorrect log 
> file locations and saying virt-v2v does the import.
>
> On the engine in /var/log/ovirt-engine/engine.log I can see where it is doing 
> the CreateImageVDSCommand, then a few outputs concerning adding the disk, 
> which end with USER_ADD_DISK_TO_VM_FINISHED_SUCCESS, then the ansible command:
>
> 2019-08-20 15:40:38,653-04
> Executing Ansible command:  /usr/bin/ansible-playbook --ssh-common-args=-F 
> /var/lib/ovirt-engine/.ssh/config -v 
> --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa 
> --inventory=/tmp/ansible-inventory8416464991088315694 
> --extra-vars=ovirt_import_ova_path="/mnt/vm_backups/myvm.ova" 
> --extra-vars=ovirt_import_ova_disks="['/rhev/data-center/mnt/glusterSD/myhost.mydomain.com:_vmstore/59502c8b-fd1e-482b-bff7-39c699c196b3/images/886a3313-19a9-435d-aeac-64c2d507bb54/465ce2ba-8883-4378-bae7-e231047ea09d']"
>  --extra-vars=ovirt_import_ova_image_mappings="{}" 
> /usr/share/ovirt-engine/playbooks/ovirt-ova-import.yml [Logfile: 
> /var/log/ovirt-engine/ova/ovirt-import-ova-ansible-20190820154038-myhost.mydomain.com-25f6ac6f-9bdc-4301-b896-d357712dbf01.log]
>
> Then nothing about the import until:
> 2019-08-20 16:11:08,859-04 INFO  
> [org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-88) [3321d4f6] Lock freed to 
> object 'EngineLock:{exclusiveLocks='[myvm=VM_NAME, 
> 464a25ba-8f0a-421d-a6ab-13eff67b4c96=VM]', sharedLocks=''}'
> 2019-08-20 16:11:08,894-04 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-88) [3321d4f6] EVENT_ID: 
> IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm myvm to Data Center 
> Default, Cluster Default
>
>
> I've found the import logs on the engine, in /var/log/ovirt-engine/ova, but 
> the ovirt-import-ova-ansible*.logs for the imports of concern only contain:
>
> 2019-08-20 19:59:48,799 p=44701 u=ovirt |  Using 
> /usr/share/ovirt-engine/playbooks/ansible.cfg as config file
> 2019-08-20 19:59:49,271 p=44701 u=ovirt |  PLAY [all] 
> *
> 2019-08-20 19:59:49,280 p=44701 u=ovirt |  TASK [ovirt-ova-extract : Run 
> extraction script] ***
>
> Watching the host selected for the import, I can see the qemu-img convert 
> process running, but then the engine frees the lock on the VM and reports the 
> import as having failed. However, the qemu-img process continues to run on 
> the host. I don't know where else to look to try and find out what's going on 
> and I cannot see anything that says why the import failed.
> Since the qemu-img process on the host is still running after the engine log 
> shows the lock has been freed and import failed, I'm guessing what's 
> happening is on the engine side.
>
> Looking at the time between the start of the ansible command and when the 
> lock is freed it is consistently around 30 minutes.
>
> # first try
> 2019-08-20 15:40:38,653-04 ansible command start
> 2019-08-20 16:11:08,859-04 lock freed
>
> 31 minutes
>
> # second try
> 2019-08-20 19:59:48,463-04 ansible command start
> 2019-08-20 20:30:21,697-04 lock freed
>
> 30 minutes, 33 seconds
>
> # third try
> 2019-08-21 09:16:42,706-04 ansible command start
> 2019-08-21 09:46:47,103-04 lock freed
>
> 30 minutes, 45 seconds
>
> With that in mind, I took a look at the available configuration keys from 
> engine-config --list. After getting each the only one set to ~30 minutes and 
> looks like it could be the problem was SSHInactivityHardTimeoutSeconds (set 
> to 1800 seconds). I set it to 3600 and tried the import again, but it still 
> failed at ~30 minutes, so that's apparently not the correct key.
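A way to brute-force that search instead of reading each key by hand — a hedged sketch, assuming `engine-config --list` prints the key name in the first column and `--get` prints `key: value` lines:

```shell
# Dump the current value of every engine-config key and grep for
# anything that looks like a 30-minute timeout (1800 s or 30 min).
if command -v engine-config >/dev/null 2>&1; then
    engine-config --list 2>/dev/null | awk '{print $1}' |
    while read -r key; do
        engine-config --get "$key" 2>/dev/null
    done | grep -E ': (1800|30)( |$)'
fi
```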
>
>
> Also, just FYI, I also tried to import the ova using virt-v2v, but that fails 
> immediately:
>
> virt-v2v: error: expecting XML expression to return an integer (expression:
> rasd:Parent/text(), matching string: 

[ovirt-users] Re: ovn networking

2019-08-22 Thread Miguel Duarte de Mora Barroso
On Thu, Aug 22, 2019 at 4:53 PM Gianluca Cecchi
 wrote:
>
>
>
> On Tue, Aug 20, 2019 at 12:01 PM Miguel Duarte de Mora Barroso 
>  wrote:
>>
>> On Mon, Aug 19, 2019 at 12:47 PM Staniforth, Paul
>>  wrote:
>> >
>> > Thanks Miguel,
>> >  I'm using 
>> > openvswitch-ovn-common-2.11.0-4.el7.x86_64
>>
>> I hoped this would be gone in version 2.11, but, seems like the
>> openvswitch build hasn't been updated for quite some time
>> (2019-03-20).
>>
>> Unfortunately I cannot provide an ETA on a newer OVS build.
>>
>
> I've just updated my ovirt engine server from 4.3.4 to 4.3.5 
> (ovirt-engine-4.3.5.5-1.el7.noarch) and then also "yum update" and reboot.
> Actually I get those messages as soon as I log in to the system, not only 
> when running commands:
>
> from my laptop
>  $ ssh ovmgr1
> Last login: Thu Aug 22 16:45:09 2019 from 10.4.23.18
> net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared 
> object file: No such file or directory
> net_mlx5: cannot initialize PMD due to missing run-time dependency on 
> rdma-core libraries (libibverbs, libmlx5)
> PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared 
> object file: No such file or directory
> PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on 
> rdma-core libraries (libibverbs, libmlx4)
> net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared 
> object file: No such file or directory
> net_mlx5: cannot initialize PMD due to missing run-time dependency on 
> rdma-core libraries (libibverbs, libmlx5)
> PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared 
> object file: No such file or directory
> PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on 
> rdma-core libraries (libibverbs, libmlx4)
> [g.cecchi@ovmgr1 ~]$
>
> Any way to fix?

You can get rid of those by installing the dependency libibverbs.
Other than that, none that I'm aware of.
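On CentOS 7 the package is in the base repos, so the fix is a one-liner (hedged sketch; package name as suggested above):

```shell
# libibverbs provides libibverbs.so.1, which the DPDK PMD drivers
# bundled with openvswitch try to dlopen at startup.
PKG=libibverbs
command -v yum >/dev/null 2>&1 && yum install -y "$PKG"
```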

> Thanks,
> Gianluca


[ovirt-users] Re: Need to enable STP on ovirt bridges

2019-08-22 Thread Dominik Holler
On Thu, Aug 22, 2019 at 3:06 PM Curtis E. Combs Jr. 
wrote:

> Seems like the STP options are so common and necessary that it would
> be a priority over seldom-used bridge_opts. I know what STP is and I'm
> not even a networking guy - never even heard of half of the
> bridge_opts that have switches in the UI.
>
> Anyway. I wanted to try the openvswitches, so I reinstalled all of my
> nodes and used "openvswitch (Technology Preview)" as the engine-setup
> option for the first host. I made a new Cluster for my nodes, added
> them all to the new cluster, created a new "logical network" for the
> internal network and attached it to the internal network ports.
>
> Now, when I go to create a new VM, I don't even have either the
> ovirtmgmt switch OR the internal switch as an option. The drop-down is
> empty as if I don't have any vnic-profiles.
>
>
openvswitch clusters are limited to ovn networks.
You can create one like described in
https://www.ovirt.org/documentation/admin-guide/chap-External_Providers.html#connecting-an-ovn-network-to-a-physical-network



> On Thu, Aug 22, 2019 at 7:34 AM Tony Pearce  wrote:
> >
> > Hi Dominik, would you mind sharing the use case for stp via API Only? I
> am keen to know this.
> > Thanks
> >
> >
> > On Thu., 22 Aug. 2019, 19:24 Dominik Holler,  wrote:
> >>
> >>
> >>
> >> On Thu, Aug 22, 2019 at 1:08 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
> >>>
> >>> On Sat, Aug 17, 2019 at 11:27 AM  wrote:
> >>> >
> >>> > Hello. I have been trying to figure out an issue for a very long
> time.
> >>> > That issue relates to the ethernet and 10gb fc links that I have on
> my
> >>> > cluster being disabled any time a migration occurs.
> >>> >
> >>> > I believe this is because I need to have STP turned on in order to
> >>> > participate with the switch. However, there does not seem to be any
> >>> > way to tell oVirt to stop turning it off! Very frustrating.
> >>> >
> >>> > After entering a cronjob that enables stp on all bridges every 1
> >>> > minute, the migration issue disappears
> >>> >
> >>> > Is there any way at all to do without this cronjob and set STP to be
> >>> > ON without having to resort to such a silly solution?
> >>>
> >>> Vdsm exposes a per bridge STP knob that you can use for this. By
> >>> default it is set to false, which is probably why you had to use this
> >>> shenanigan.
> >>>
> >>> You can, for instance:
> >>>
> >>> # show present state
> >>> [vagrant@vdsm ~]$ ip a
> >>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> >>> group default qlen 1000
> >>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >>> inet 127.0.0.1/8 scope host lo
> >>>valid_lft forever preferred_lft forever
> >>> 2: eth0:  mtu 1500 qdisc pfifo_fast
> >>> state UP group default qlen 1000
> >>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
> >>> 3: eth1:  mtu 1500 qdisc pfifo_fast
> >>> state UP group default qlen 1000
> >>> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
> >>> inet 192.168.50.50/24 brd 192.168.50.255 scope global
> noprefixroute eth1
> >>>valid_lft forever preferred_lft forever
> >>> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
> >>>valid_lft forever preferred_lft forever
> >>> 19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
> >>> group default qlen 1000
> >>> link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
> >>>
> >>> # show example bridge configuration - you're looking for the STP knob
> here.
> >>> [root@vdsm ~]$ cat bridged_net_with_stp
> >>> {
> >>>   "bondings": {},
> >>>   "networks": {
> >>> "test-network": {
> >>>   "nic": "eth0",
> >>>   "switch": "legacy",
> >>>   "bridged": true,
> >>>   "stp": true
> >>> }
> >>>   },
> >>>   "options": {
> >>> "connectivityCheck": false
> >>>   }
> >>> }
> >>>
> >>> # issue setup networks command:
> >>> [root@vdsm ~]$ vdsm-client -f bridged_net_with_stp Host setupNetworks
> >>> {
> >>> "code": 0,
> >>> "message": "Done"
> >>> }
> >>>
> >>> # show bridges
> >>> [root@vdsm ~]$ brctl show
> >>> bridge name bridge id STP enabled interfaces
> >>> ;vdsmdummy; 8000. no
> >>> test-network 8000.52540041fb37 yes eth0
> >>>
> >>> # show final state
> >>> [root@vdsm ~]$ ip a
> >>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> >>> group default qlen 1000
> >>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >>> inet 127.0.0.1/8 scope host lo
> >>>valid_lft forever preferred_lft forever
> >>> 2: eth0:  mtu 1500 qdisc pfifo_fast
> >>> master test-network state UP group default qlen 1000
> >>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
> >>> 3: eth1:  mtu 1500 qdisc pfifo_fast
> >>> state UP group default qlen 1000
> >>> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
> >>> inet 192.168.50.50/24 brd 192.168.50.255 scope global
> noprefixroute eth1
> >>>valid_lft forever preferred_lft forever
> >>> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
> >>>  

[ovirt-users] Re: VM won't migrate back to original node

2019-08-22 Thread Kaustav Majumder
Hi,
For starters, can you send the engine and vdsm logs?
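If it helps, on a default install those logs live at fixed paths; a hedged sketch for bundling whatever is readable on the current machine:

```shell
# Engine log lives on the engine host; vdsm.log on each hypervisor
# involved in the migration. Bundle whatever exists locally.
OUT=/tmp/migration-logs.tar.gz
FILES=""
for f in /var/log/ovirt-engine/engine.log /var/log/vdsm/vdsm.log; do
    [ -r "$f" ] && FILES="$FILES $f"
done
if [ -n "$FILES" ]; then
    tar czf "$OUT" $FILES
fi
```

oVirt also ships the `ovirt-log-collector` tool, which gathers these (and much more) automatically.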

On Thu, Aug 22, 2019 at 3:30 PM Alexander Schichow  wrote:

> I set up a high available VM which is using a shareable LUN. When
> migrating said VM from the original Node everything is fine. When I try to
> migrate it back, I get an error.
> If I suspend or shutdown the VM, it returns to the original host. Then it
> repeats. I'm really out of ideas so maybe any of you happen to have any? I
> would really appreciate that.
> Thanks


-- 

Thanks,

Kaustav Majumder


[ovirt-users] Re: ovn networking

2019-08-22 Thread Gianluca Cecchi
On Tue, Aug 20, 2019 at 12:01 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> On Mon, Aug 19, 2019 at 12:47 PM Staniforth, Paul
>  wrote:
> >
> > Thanks Miguel,
> >  I'm using
> openvswitch-ovn-common-2.11.0-4.el7.x86_64
>
> I hoped this would be gone in version 2.11, but, seems like the
> openvswitch build hasn't been updated for quite some time
> (2019-03-20).
>
> Unfortunately I cannot provide an ETA on a newer OVS build.
>
>
I've just updated my ovirt engine server from 4.3.4 to 4.3.5
(ovirt-engine-4.3.5.5-1.el7.noarch) and then also "yum update" and reboot.
Actually I get those messages as soon as I log in to the system, not only
when running commands:

from my laptop
 $ ssh ovmgr1
Last login: Thu Aug 22 16:45:09 2019 from 10.4.23.18
net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared
object file: No such file or directory
net_mlx5: cannot initialize PMD due to missing run-time dependency on
rdma-core libraries (libibverbs, libmlx5)
PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open
shared object file: No such file or directory
PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on
rdma-core libraries (libibverbs, libmlx4)
net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared
object file: No such file or directory
net_mlx5: cannot initialize PMD due to missing run-time dependency on
rdma-core libraries (libibverbs, libmlx5)
PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open
shared object file: No such file or directory
PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on
rdma-core libraries (libibverbs, libmlx4)
[g.cecchi@ovmgr1 ~]$

Any way to fix?
Thanks,
Gianluca


[ovirt-users] Re: Need to enable STP on ovirt bridges

2019-08-22 Thread Curtis E. Combs Jr.
Seems like the STP options are so common and necessary that they would
be a priority over the seldom-used bridge_opts. I know what STP is and I'm
not even a networking guy - never even heard of half of the
bridge_opts that have switches in the UI.

Anyway. I wanted to try the openvswitches, so I reinstalled all of my
nodes and used "openvswitch (Technology Preview)" as the engine-setup
option for the first host. I made a new Cluster for my nodes, added
them all to the new cluster, created a new "logical network" for the
internal network and attached it to the internal network ports.

Now, when I go to create a new VM, I don't even have either the
ovirtmgmt switch OR the internal switch as an option. The drop-down is
empty as if I don't have any vnic-profiles.

On Thu, Aug 22, 2019 at 7:34 AM Tony Pearce  wrote:
>
> Hi Dominik, would you mind sharing the use case for stp via API Only? I am 
> keen to know this.
> Thanks
>
>
> On Thu., 22 Aug. 2019, 19:24 Dominik Holler,  wrote:
>>
>>
>>
>> On Thu, Aug 22, 2019 at 1:08 PM Miguel Duarte de Mora Barroso 
>>  wrote:
>>>
>>> On Sat, Aug 17, 2019 at 11:27 AM  wrote:
>>> >
>>> > Hello. I have been trying to figure out an issue for a very long time.
>>> > That issue relates to the ethernet and 10gb fc links that I have on my
>>> > cluster being disabled any time a migration occurs.
>>> >
>>> > I believe this is because I need to have STP turned on in order to
>>> > participate with the switch. However, there does not seem to be any
>>> > way to tell oVirt to stop turning it off! Very frustrating.
>>> >
>>> > After entering a cronjob that enables stp on all bridges every 1
>>> > minute, the migration issue disappears
>>> >
>>> > Is there any way at all to do without this cronjob and set STP to be
>>> > ON without having to resort to such a silly solution?
>>>
>>> Vdsm exposes a per bridge STP knob that you can use for this. By
>>> default it is set to false, which is probably why you had to use this
>>> shenanigan.
>>>
>>> You can, for instance:
>>>
>>> # show present state
>>> [vagrant@vdsm ~]$ ip a
>>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>>> group default qlen 1000
>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>> inet 127.0.0.1/8 scope host lo
>>>valid_lft forever preferred_lft forever
>>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>>> state UP group default qlen 1000
>>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>>> state UP group default qlen 1000
>>> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
>>> inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute eth1
>>>valid_lft forever preferred_lft forever
>>> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
>>>valid_lft forever preferred_lft forever
>>> 19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>>> group default qlen 1000
>>> link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
>>>
>>> # show example bridge configuration - you're looking for the STP knob here.
>>> [root@vdsm ~]$ cat bridged_net_with_stp
>>> {
>>>   "bondings": {},
>>>   "networks": {
>>> "test-network": {
>>>   "nic": "eth0",
>>>   "switch": "legacy",
>>>   "bridged": true,
>>>   "stp": true
>>> }
>>>   },
>>>   "options": {
>>> "connectivityCheck": false
>>>   }
>>> }
>>>
>>> # issue setup networks command:
>>> [root@vdsm ~]$ vdsm-client -f bridged_net_with_stp Host setupNetworks
>>> {
>>> "code": 0,
>>> "message": "Done"
>>> }
>>>
>>> # show bridges
>>> [root@vdsm ~]$ brctl show
>>> bridge name bridge id STP enabled interfaces
>>> ;vdsmdummy; 8000. no
>>> test-network 8000.52540041fb37 yes eth0
>>>
>>> # show final state
>>> [root@vdsm ~]$ ip a
>>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>>> group default qlen 1000
>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>> inet 127.0.0.1/8 scope host lo
>>>valid_lft forever preferred_lft forever
>>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>>> master test-network state UP group default qlen 1000
>>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>>> state UP group default qlen 1000
>>> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
>>> inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute eth1
>>>valid_lft forever preferred_lft forever
>>> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
>>>valid_lft forever preferred_lft forever
>>> 19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>>> group default qlen 1000
>>> link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
>>> 432: test-network:  mtu 1500 qdisc
>>> noqueue state UP group default qlen 1000
>>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>>>
>>> I don't think this STP parameter is exposed via engine UI; @Dominik
>>> Holler , could you confirm ? What are our plans for it ?
>>>
>>
>> STP is only available via REST-API, see
>> 

[ovirt-users] large size OVA import fails

2019-08-22 Thread jason . l . cox
I have a fairly large OVA (~200GB) that was exported from oVirt4.3.5. I'm 
trying to import it into a new cluster, also oVirt4.3.5. The import starts fine 
but fails again and again.
Everything I can find online appears to be outdated, mentioning incorrect log 
file locations and saying virt-v2v does the import.

On the engine in /var/log/ovirt-engine/engine.log I can see where it is doing 
the CreateImageVDSCommand, then a few outputs concerning adding the disk, which 
end with USER_ADD_DISK_TO_VM_FINISHED_SUCCESS, then the ansible command:

2019-08-20 15:40:38,653-04
Executing Ansible command:  /usr/bin/ansible-playbook --ssh-common-args=-F 
/var/lib/ovirt-engine/.ssh/config -v 
--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa 
--inventory=/tmp/ansible-inventory8416464991088315694 
--extra-vars=ovirt_import_ova_path="/mnt/vm_backups/myvm.ova" 
--extra-vars=ovirt_import_ova_disks="['/rhev/data-center/mnt/glusterSD/myhost.mydomain.com:_vmstore/59502c8b-fd1e-482b-bff7-39c699c196b3/images/886a3313-19a9-435d-aeac-64c2d507bb54/465ce2ba-8883-4378-bae7-e231047ea09d']"
 --extra-vars=ovirt_import_ova_image_mappings="{}" 
/usr/share/ovirt-engine/playbooks/ovirt-ova-import.yml [Logfile: 
/var/log/ovirt-engine/ova/ovirt-import-ova-ansible-20190820154038-myhost.mydomain.com-25f6ac6f-9bdc-4301-b896-d357712dbf01.log]

Then nothing about the import until:
2019-08-20 16:11:08,859-04 INFO  
[org.ovirt.engine.core.bll.exportimport.ImportVmFromOvaCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-88) [3321d4f6] Lock freed to 
object 'EngineLock:{exclusiveLocks='[myvm=VM_NAME, 
464a25ba-8f0a-421d-a6ab-13eff67b4c96=VM]', sharedLocks=''}'
2019-08-20 16:11:08,894-04 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-88) [3321d4f6] EVENT_ID: 
IMPORTEXPORT_IMPORT_VM_FAILED(1,153), Failed to import Vm myvm to Data Center 
Default, Cluster Default


I've found the import logs on the engine, in /var/log/ovirt-engine/ova, but the 
ovirt-import-ova-ansible*.logs for the imports of concern only contain:

2019-08-20 19:59:48,799 p=44701 u=ovirt |  Using 
/usr/share/ovirt-engine/playbooks/ansible.cfg as config file
2019-08-20 19:59:49,271 p=44701 u=ovirt |  PLAY [all] 
*
2019-08-20 19:59:49,280 p=44701 u=ovirt |  TASK [ovirt-ova-extract : Run 
extraction script] ***

Watching the host selected for the import, I can see the qemu-img convert 
process running, but then the engine frees the lock on the VM and reports the 
import as having failed. However, the qemu-img process continues to run on the 
host. I don't know where else to look to try and find out what's going on and I 
cannot see anything that says why the import failed. 
Since the qemu-img process on the host is still running after the engine log 
shows the lock has been freed and import failed, I'm guessing what's happening 
is on the engine side.

Looking at the time between the start of the ansible command and when the lock 
is freed, it is consistently around 30 minutes.

# first try
2019-08-20 15:40:38,653-04 ansible command start
2019-08-20 16:11:08,859-04 lock freed

30 minutes, 30 seconds

# second try
2019-08-20 19:59:48,463-04 ansible command start
2019-08-20 20:30:21,697-04 lock freed

30 minutes, 33 seconds

# third try
2019-08-21 09:16:42,706-04 ansible command start
2019-08-21 09:46:47,103-04 lock freed

30 minutes, 4 seconds
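
Those intervals can be double-checked directly from the log timestamps with GNU
date (sub-second parts ignored; the -04 offset is identical on both ends of each
pair, so it cancels out):

```shell
# Interval between ansible start and "lock freed" for each try, in seconds
for pair in "15:40:38 16:11:08" "19:59:48 20:30:21" "09:16:42 09:46:47"; do
    set -- $pair
    start=$(date -u -d "1970-01-01 $1" +%s)
    end=$(date -u -d "1970-01-01 $2" +%s)
    echo "$(( end - start )) seconds"
done
# prints 1830, 1833 and 1805 seconds -- each within a minute of 1800
```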

With that in mind, I took a look at the available configuration keys from 
engine-config --list. After checking each of them, the only one set to ~30 
minutes that looked like it could be the problem was 
SSHInactivityHardTimeoutSeconds (set to 1800 seconds). I set it to 3600 and 
tried the import again, but it still failed at ~30 minutes, so that's 
apparently not the correct key.
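
For reference, checking and changing such a key looks roughly like this (a
sketch; the key name is the one tried above, which did not turn out to be the
culprit, and some keys additionally need a --cver version qualifier):

```shell
# Sketch: query and raise one candidate engine-config key on the engine host.
# engine-config is part of ovirt-engine; changes apply only after a restart.
key=SSHInactivityHardTimeoutSeconds
if command -v engine-config >/dev/null 2>&1; then
    engine-config -g "$key"           # show the current value
    engine-config -s "$key=3600"      # raise to one hour
    systemctl restart ovirt-engine    # make the new value effective
else
    echo "run this on the engine host (key: $key)"
fi
```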


Just FYI, I also tried to import the OVA using virt-v2v, but that fails 
immediately:

virt-v2v: error: expecting XML expression to return an integer (expression: 
rasd:Parent/text(), matching string: ----)

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


Does virt-v2v not support OVAs created by the oVirt 'export to ova' option?

So my main question is: is there a timeout for VM imports through the engine 
web UI?
And if so, is it configurable?



Thanks in advance.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7RSIQVAM3FQHCM6AS5MGAMZDBDQBWYIJ/


[ovirt-users] Re: Custum driver in oVirt node image?

2019-08-22 Thread thomas
Thanks Paul, what a life-saver: I never expected something like that would 
exist, so I didn't even look for it.

In the meantime I went for option three, using the 2.5Gbit NIC, because I 
managed to find out why my CentOS nodes failed: adding Cinnamon breaks oVirt 
for some very strange reason.

I'd still like to know how to include a driver into an oVirt node image, but 
with two of three options working I feel very relaxed and pleased.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K2LYQLRMJG74QP5FVNTMU6ZGAFDDQRX4/


[ovirt-users] Re: Health Warning: Cinnamon toxic when adding EL7 nodes as oVirt hosts

2019-08-22 Thread thomas
Hi Didi, thanks for the attention!

I created a clean slate test using four identical (Atom J5005 32GB RAM, 1TB 
SSD) boxes and virgin SSDs.

Three-node hosted-engine using oVirt node 4.3.5 (August 5th) image.

Tested adding an n+1 compute node with the same current oVirt image, which 
worked fine (after a long series of trouble with existing CentOS machines).

Since EL7 seems to have equal standing with oVirt nodes, I couldn't quite believe 
that it would always fail. I tried to isolate whether there was something specific 
in my way of setting up the CentOS nodes that caused the failure.

So I started with a clean-sheet CentOS 7 and another clean SSD.

CentOS 1810 ISOs, developer workstation install variant, storage layout 
identical to the oVirt recommended, updated to the latest patches.

Added oVirt repo, cockpit oVirt dashboard, ssl key etc. with all the proper 
reboots

Test 1: Add 4th node as host to three-node cluster. Works like a charm, migrate 
a running VM back and forth, activate maintenance and remove 4th host

Experiment 1: Add EPEL ("yum install epel-release"), yum install yumex, pick 
Cinnamon in Yumex (or basically yum groupinstall cinnamon)

Test 2: Try to add 4th node again: Immediate failure with that message in the 
management engine's deploy log.

Experiment 2: yum erase group cinnamon; yum autoremove; reboot

Test 3: Add 4th node again: Success! (Adding Cinnamon afterwards is no problem; 
a re-install seems likely to fail, though)

Yes, I saw that bug report. Noticed it was RHEL8 and... wasn't sure it would be 
related.

And, like you, I can't really fathom why this would happen: I compared package 
versions on both sides, oVirt nodes and CentOS, for everything involved, and 
they seemed identical across the board.

I had installed the yum priorities plugin and made sure that the "full" epel 
repository had prio 99 while the oVirt repos had 1 to ensure that oVirt would 
win out in case of any potential version conflict...

As I understand it, this log on the engine lists actions performed via an ssh 
connection on the to-be host, and OTOPI might actually be gathering packages 
through some kind of miniyum to then deploy them on the to-be host in the 
next step.

So I don't know if the Python context of this is the normal Python 2 context of 
the to-be host, or a temporary one that was actually created on-the-fly, etc. 
Somehow I never found the 200-page oVirt insider bible :-)

If you can help me with some context, perhaps I can dig deeper and help file 
a better bug report. At the moment it seems so odd and vague that I didn't feel 
comfortable filing one.

Can't see any attach buttons, so I guess I may have to fiddle with my browser's 
privacy settings..
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/54X3236S4PFQTIA6Z6O4ZXYA25AY3JMY/


[ovirt-users] Re: Need to enable STP on ovirt bridges

2019-08-22 Thread Tony Pearce
Hi Dominik, would you mind sharing the use case for STP via API only? I am
keen to know this.
Thanks


On Thu., 22 Aug. 2019, 19:24 Dominik Holler,  wrote:

>
>
> On Thu, Aug 22, 2019 at 1:08 PM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
>
>> On Sat, Aug 17, 2019 at 11:27 AM  wrote:
>> >
>> > Hello. I have been trying to figure out an issue for a very long time.
>> > That issue relates to the ethernet and 10gb fc links that I have on my
>> > cluster being disabled any time a migration occurs.
>> >
>> > I believe this is because I need to have STP turned on in order to
>> > participate with the switch. However, there does not seem to be any
>> > way to tell oVirt to stop turning it off! Very frustrating.
>> >
>> > After entering a cronjob that enables stp on all bridges every 1
>> > minute, the migration issue disappears
>> >
>> > Is there any way at all to do without this cronjob and set STP to be
>> > ON without having to resort to such a silly solution?
>>
>> Vdsm exposes a per bridge STP knob that you can use for this. By
>> default it is set to false, which is probably why you had to use this
>> shenanigan.
>>
>> You can, for instance:
>>
>> # show present state
>> [vagrant@vdsm ~]$ ip a
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>> group default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>> state UP group default qlen 1000
>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>> state UP group default qlen 1000
>> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
>> inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute
>> eth1
>>valid_lft forever preferred_lft forever
>> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
>>valid_lft forever preferred_lft forever
>> 19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>> group default qlen 1000
>> link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
>>
>> # show example bridge configuration - you're looking for the STP knob
>> here.
>> [root@vdsm ~]$ cat bridged_net_with_stp
>> {
>>   "bondings": {},
>>   "networks": {
>> "test-network": {
>>   "nic": "eth0",
>>   "switch": "legacy",
>>   "bridged": true,
>>   "stp": true
>> }
>>   },
>>   "options": {
>> "connectivityCheck": false
>>   }
>> }
>>
>> # issue setup networks command:
>> [root@vdsm ~]$ vdsm-client -f bridged_net_with_stp Host setupNetworks
>> {
>> "code": 0,
>> "message": "Done"
>> }
>>
>> # show bridges
>> [root@vdsm ~]$ brctl show
>> bridge name bridge id STP enabled interfaces
>> ;vdsmdummy; 8000. no
>> test-network 8000.52540041fb37 yes eth0
>>
>> # show final state
>> [root@vdsm ~]$ ip a
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>> group default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>> master test-network state UP group default qlen 1000
>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>> state UP group default qlen 1000
>> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
>> inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute
>> eth1
>>valid_lft forever preferred_lft forever
>> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
>>valid_lft forever preferred_lft forever
>> 19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>> group default qlen 1000
>> link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
>> 432: test-network:  mtu 1500 qdisc
>> noqueue state UP group default qlen 1000
>> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>>
>> I don't think this STP parameter is exposed via engine UI; @Dominik
>> Holler , could you confirm ? What are our plans for it ?
>>
>>
> STP is only available via REST-API, see
> http://ovirt.github.io/ovirt-engine-api-model/4.3/#types/network
> please find an example how to enable STP in
> https://gist.github.com/dominikholler/4e70c9ef9929d93b6807f56d43a70b95
>
> We have no plans to add STP to the web ui,
> but new feature requests are always welcome on
> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>
>
>
>> >
>> > Here are some details about my systems, if you need it.
>> >
>> >
>> > selinux is disabled.
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > [root@swm-02 ~]# rpm -qa | grep ovirt
>> > ovirt-imageio-common-1.5.1-0.el7.x86_64
>> > ovirt-release43-4.3.5.2-1.el7.noarch
>> > ovirt-imageio-daemon-1.5.1-0.el7.noarch
>> > ovirt-vmconsole-host-1.0.7-2.el7.noarch
>> > ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
>> > ovirt-ansible-hosted-engine-setup-1.0.26-1.el7.noarch
>> > python2-ovirt-host-deploy-1.8.0-1.el7.noarch
>> > 

[ovirt-users] Re: Need to enable STP on ovirt bridges

2019-08-22 Thread Dominik Holler
On Thu, Aug 22, 2019 at 1:08 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> On Sat, Aug 17, 2019 at 11:27 AM  wrote:
> >
> > Hello. I have been trying to figure out an issue for a very long time.
> > That issue relates to the ethernet and 10gb fc links that I have on my
> > cluster being disabled any time a migration occurs.
> >
> > I believe this is because I need to have STP turned on in order to
> > participate with the switch. However, there does not seem to be any
> > way to tell oVirt to stop turning it off! Very frustrating.
> >
> > After entering a cronjob that enables stp on all bridges every 1
> > minute, the migration issue disappears
> >
> > Is there any way at all to do without this cronjob and set STP to be
> > ON without having to resort to such a silly solution?
>
> Vdsm exposes a per bridge STP knob that you can use for this. By
> default it is set to false, which is probably why you had to use this
> shenanigan.
>
> You can, for instance:
>
> # show present state
> [vagrant@vdsm ~]$ ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> 2: eth0:  mtu 1500 qdisc pfifo_fast
> state UP group default qlen 1000
> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
> 3: eth1:  mtu 1500 qdisc pfifo_fast
> state UP group default qlen 1000
> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
> inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute
> eth1
>valid_lft forever preferred_lft forever
> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
>valid_lft forever preferred_lft forever
> 19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
> group default qlen 1000
> link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
>
> # show example bridge configuration - you're looking for the STP knob here.
> [root@vdsm ~]$ cat bridged_net_with_stp
> {
>   "bondings": {},
>   "networks": {
> "test-network": {
>   "nic": "eth0",
>   "switch": "legacy",
>   "bridged": true,
>   "stp": true
> }
>   },
>   "options": {
> "connectivityCheck": false
>   }
> }
>
> # issue setup networks command:
> [root@vdsm ~]$ vdsm-client -f bridged_net_with_stp Host setupNetworks
> {
> "code": 0,
> "message": "Done"
> }
>
> # show bridges
> [root@vdsm ~]$ brctl show
> bridge name bridge id STP enabled interfaces
> ;vdsmdummy; 8000. no
> test-network 8000.52540041fb37 yes eth0
>
> # show final state
> [root@vdsm ~]$ ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> 2: eth0:  mtu 1500 qdisc pfifo_fast
> master test-network state UP group default qlen 1000
> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
> 3: eth1:  mtu 1500 qdisc pfifo_fast
> state UP group default qlen 1000
> link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
> inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute
> eth1
>valid_lft forever preferred_lft forever
> inet6 fe80::5054:ff:fe83:5b6f/64 scope link
>valid_lft forever preferred_lft forever
> 19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
> group default qlen 1000
> link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
> 432: test-network:  mtu 1500 qdisc
> noqueue state UP group default qlen 1000
> link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
>
> I don't think this STP parameter is exposed via engine UI; @Dominik
> Holler , could you confirm ? What are our plans for it ?
>
>
STP is only available via REST-API, see
http://ovirt.github.io/ovirt-engine-api-model/4.3/#types/network
please find an example how to enable STP in
https://gist.github.com/dominikholler/4e70c9ef9929d93b6807f56d43a70b95

We have no plans to add STP to the web ui,
but new feature requests are always welcome on
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
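
The REST-API update described above can be sketched with curl (the engine FQDN,
credentials, and network id below are placeholders; the linked gist shows the
full flow, including looking up the network id first):

```shell
# Placeholders: engine FQDN, admin password, and the network's id
# (the id can be found with: GET /ovirt-engine/api/networks).
body='<network><stp>true</stp></network>'
echo "$body"
# Uncomment to send the update for real:
# curl -k -u 'admin@internal:PASSWORD' -X PUT \
#      -H 'Content-Type: application/xml' -d "$body" \
#      'https://engine.example.com/ovirt-engine/api/networks/NETWORK_ID'
```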



> >
> > Here are some details about my systems, if you need it.
> >
> >
> > selinux is disabled.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > [root@swm-02 ~]# rpm -qa | grep ovirt
> > ovirt-imageio-common-1.5.1-0.el7.x86_64
> > ovirt-release43-4.3.5.2-1.el7.noarch
> > ovirt-imageio-daemon-1.5.1-0.el7.noarch
> > ovirt-vmconsole-host-1.0.7-2.el7.noarch
> > ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
> > ovirt-ansible-hosted-engine-setup-1.0.26-1.el7.noarch
> > python2-ovirt-host-deploy-1.8.0-1.el7.noarch
> > ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
> > python2-ovirt-setup-lib-1.2.0-1.el7.noarch
> > cockpit-machines-ovirt-195.1-1.el7.noarch
> > ovirt-hosted-engine-ha-2.3.3-1.el7.noarch
> > ovirt-vmconsole-1.0.7-2.el7.noarch
> > cockpit-ovirt-dashboard-0.13.5-1.el7.noarch
> > ovirt-provider-ovn-driver-1.2.22-1.el7.noarch
> > 

[ovirt-users] Re: attach untagged vlan internally on vm

2019-08-22 Thread Miguel Duarte de Mora Barroso
On Wed, Aug 21, 2019 at 9:18 AM  wrote:
>
> good day
> currently I am testing oVirt on a single box and set up some tagged VMs and a
> non-tagged VM.
> the non-tagged VM is a firewall, but it has limitations on the number of NICs,
> so I cannot attach a tagged vNIC and wish to handle VLAN tagging on it
>
> is it possible to pass untagged frames internally?

I think it would fall back to the Linux bridge default configuration, which
internally tags untagged frames with VLAN ID 1 and untags them when they exit
the port. Unless I'm wrong (for instance, if we change the bridge defaults),
this means you can pass untagged frames through the bridge.

Adding Edward, to keep me honest.




> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HYFSLS5QM5DKBYWFF44NCB4E3CD5GKH4/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ME77W5PLKOQC5U3OXNZE3W7W27ZOPVIP/


[ovirt-users] Re: Need to enable STP on ovirt bridges

2019-08-22 Thread Miguel Duarte de Mora Barroso
On Sat, Aug 17, 2019 at 11:27 AM  wrote:
>
> Hello. I have been trying to figure out an issue for a very long time.
> That issue relates to the ethernet and 10gb fc links that I have on my
> cluster being disabled any time a migration occurs.
>
> I believe this is because I need to have STP turned on in order to
> participate with the switch. However, there does not seem to be any
> way to tell oVirt to stop turning it off! Very frustrating.
>
> After entering a cronjob that enables stp on all bridges every 1
> minute, the migration issue disappears
>
> Is there any way at all to do without this cronjob and set STP to be
> ON without having to resort to such a silly solution?

Vdsm exposes a per bridge STP knob that you can use for this. By
default it is set to false, which is probably why you had to use this
shenanigan.

You can, for instance:

# show present state
[vagrant@vdsm ~]$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast
state UP group default qlen 1000
link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
3: eth1:  mtu 1500 qdisc pfifo_fast
state UP group default qlen 1000
link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute eth1
   valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe83:5b6f/64 scope link
   valid_lft forever preferred_lft forever
19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
group default qlen 1000
link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff

# show example bridge configuration - you're looking for the STP knob here.
[root@vdsm ~]$ cat bridged_net_with_stp
{
  "bondings": {},
  "networks": {
"test-network": {
  "nic": "eth0",
  "switch": "legacy",
  "bridged": true,
  "stp": true
}
  },
  "options": {
"connectivityCheck": false
  }
}

# issue setup networks command:
[root@vdsm ~]$ vdsm-client -f bridged_net_with_stp Host setupNetworks
{
"code": 0,
"message": "Done"
}

# show bridges
[root@vdsm ~]$ brctl show
bridge name bridge id STP enabled interfaces
;vdsmdummy; 8000. no
test-network 8000.52540041fb37 yes eth0

# show final state
[root@vdsm ~]$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast
master test-network state UP group default qlen 1000
link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff
3: eth1:  mtu 1500 qdisc pfifo_fast
state UP group default qlen 1000
link/ether 52:54:00:83:5b:6f brd ff:ff:ff:ff:ff:ff
inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute eth1
   valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe83:5b6f/64 scope link
   valid_lft forever preferred_lft forever
19: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
group default qlen 1000
link/ether 8e:5c:2e:87:fa:0b brd ff:ff:ff:ff:ff:ff
432: test-network:  mtu 1500 qdisc
noqueue state UP group default qlen 1000
link/ether 52:54:00:41:fb:37 brd ff:ff:ff:ff:ff:ff

I don't think this STP parameter is exposed via the engine UI; @Dominik
Holler, could you confirm? What are our plans for it?

>
> Here are some details about my systems, if you need it.
>
>
> selinux is disabled.
>
>
>
>
>
>
>
>
>
> [root@swm-02 ~]# rpm -qa | grep ovirt
> ovirt-imageio-common-1.5.1-0.el7.x86_64
> ovirt-release43-4.3.5.2-1.el7.noarch
> ovirt-imageio-daemon-1.5.1-0.el7.noarch
> ovirt-vmconsole-host-1.0.7-2.el7.noarch
> ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
> ovirt-ansible-hosted-engine-setup-1.0.26-1.el7.noarch
> python2-ovirt-host-deploy-1.8.0-1.el7.noarch
> ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
> python2-ovirt-setup-lib-1.2.0-1.el7.noarch
> cockpit-machines-ovirt-195.1-1.el7.noarch
> ovirt-hosted-engine-ha-2.3.3-1.el7.noarch
> ovirt-vmconsole-1.0.7-2.el7.noarch
> cockpit-ovirt-dashboard-0.13.5-1.el7.noarch
> ovirt-provider-ovn-driver-1.2.22-1.el7.noarch
> ovirt-host-deploy-common-1.8.0-1.el7.noarch
> ovirt-host-4.3.4-1.el7.x86_64
> python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64
> ovirt-host-dependencies-4.3.4-1.el7.x86_64
> ovirt-ansible-repositories-1.1.5-1.el7.noarch
> [root@swm-02 ~]# cat /etc/redhat-release
> CentOS Linux release 7.6.1810 (Core)
> [root@swm-02 ~]# uname -r
> 3.10.0-957.27.2.el7.x86_64
> You have new mail in /var/spool/mail/root
> [root@swm-02 ~]# ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> 2: em1:  mtu 1500 qdisc mq master
> test state UP group default qlen 1000
> link/ether d4:ae:52:8d:50:48 brd 

[ovirt-users] VM won't migrate back to original node

2019-08-22 Thread Alexander Schichow
I set up a highly available VM which is using a shareable LUN. When migrating 
said VM from the original node, everything is fine. When I try to migrate it 
back, I get an error. 
If I suspend or shut down the VM, it returns to the original host. Then it 
repeats. I'm really out of ideas, so maybe one of you has some? I would really 
appreciate that.
Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AOKI2O4JC6PI5Q4VZKV657VHDRESIV7R/


[ovirt-users] Re: Health Warning: Cinnamon toxic when adding EL7 nodes as oVirt hosts

2019-08-22 Thread Yedidyah Bar David
On Thu, Aug 22, 2019 at 11:31 AM  wrote:
>
> Of course a desktop on a virtualization host isn't really recommended, but 
> actually, managing highly or even high-available support services as VMs on 
> a set of fat workstations under the control of oVirt can be rather useful...
>
> And I prefer Cinnamon so much over the other desktops, it gets installed 
> almost first after setup... Which turns out to be a mistake with oVirt.
>
> Anyhow, to make a long journey of hair-pulling root-cause searching short:
> Installing Cinnamon from EPEL makes otopi fail to find rpmUtils for one reason
> or another, even though I can see it on both sides, the engine and the target host.

Which OSes on each of them? Which python versions? Other stuff from
EPEL? What version of oVirt?

Having all of EPEL enabled is known to have caused problems in the
past, and is not recommended, nor tested (other than by people like
you :-) ).
You might try to carefully enable only Cinnamon and its dependencies
and see what you can come up with.

>
> Remove Cinnamon, it works again and you can re-install Cinnamon after.
> I didn't test re-deploys, but I'd expect them to fail.
>
> Don't know if this should be a bug report, since Cinnamon very unfortunately 
> doesn't seem official just yet.
>
> This is what you may expect to see in the engine logs:
>
> 2019-08-22 09:49:24,152+0200 DEBUG otopi.context context._executeMethod:145 
> method exception
> Traceback (most recent call last):
>   File "/tmp/ovirt-BAJG2FYLMu/pythonlib/otopi/context.py", line 132, in 
> _executeMethod
> method['method']()
>   File 
> "/tmp/ovirt-BAJG2FYLMu/otopi-plugins/ovirt-host-deploy/kdump/packages.py", 
> line 216, in _customization
> self._kexec_tools_version_supported()
>   File 
> "/tmp/ovirt-BAJG2FYLMu/otopi-plugins/ovirt-host-deploy/kdump/packages.py", 
> line 143, in _kexec_tools_version_supported
> from rpmUtils.miscutils import compareEVR
> ModuleNotFoundError: No module named 'rpmUtils'
> 2019-08-22 09:49:24,152+0200 ERROR otopi.context context._executeMethod:154 
> Failed to execute stage 'Environment customization': No module named 
> 'rpmUtils'

This might be this known bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1724056

Can't tell how it's triggered by adding EPEL.

If you think it's a different bug, by all means - please report it!
Thanks. And please attach relevant info, logs, etc.

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5B72ZDFFKLJMYSDETNQVB4CCQENH3I5L/


[ovirt-users] Health Warning: Cinnamon toxic when adding EL7 nodes as oVirt hosts

2019-08-22 Thread thomas
Of course a desktop on a virtualization host isn't really recommended, but 
actually, managing highly or even high-available support services as VMs on a 
set of fat workstations under the control of oVirt can be rather useful...

And I prefer Cinnamon so much over the other desktops, it gets installed almost 
first after setup... Which turns out to be a mistake with oVirt.

Anyhow, to make a long journey of hair-pulling root-cause searching short:
Installing Cinnamon from EPEL makes otopi fail to find rpmUtils for one reason 
or another, even though I can see it on both sides, the engine and the target host. 

Remove Cinnamon, it works again and you can re-install Cinnamon after.
I didn't test re-deploys, but I'd expect them to fail.

Don't know if this should be a bug report, since Cinnamon very unfortunately 
doesn't seem official just yet.

This is what you may expect to see in the engine logs:

2019-08-22 09:49:24,152+0200 DEBUG otopi.context context._executeMethod:145 
method exception
Traceback (most recent call last):
  File "/tmp/ovirt-BAJG2FYLMu/pythonlib/otopi/context.py", line 132, in 
_executeMethod
method['method']()
  File 
"/tmp/ovirt-BAJG2FYLMu/otopi-plugins/ovirt-host-deploy/kdump/packages.py", line 
216, in _customization
self._kexec_tools_version_supported()
  File 
"/tmp/ovirt-BAJG2FYLMu/otopi-plugins/ovirt-host-deploy/kdump/packages.py", line 
143, in _kexec_tools_version_supported
from rpmUtils.miscutils import compareEVR
ModuleNotFoundError: No module named 'rpmUtils'
2019-08-22 09:49:24,152+0200 ERROR otopi.context context._executeMethod:154 
Failed to execute stage 'Environment customization': No module named 'rpmUtils'
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VD7B6TGK7DHNNGOFGQ6Y4XVMHSDFDNQL/


[ovirt-users] [ANN] oVirt 4.3.6 Third Release Candidate is now available for testing

2019-08-22 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.6 Third Release Candidate for testing, as of August 22nd, 2019.

This update is a release candidate of the sixth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) will be made available when
CentOS 7.7 is released.

See the release notes [1] for known issues, new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is not yet available, pending the CentOS 7.7 release

Additional Resources:
* Read more about the oVirt 4.3.6 release highlights:
http://www.ovirt.org/release/4.3.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.6/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KXJWKWPQI6RPMXYI6VZFMPWOJZXW6YHL/


[ovirt-users] Re: Where to configure iscsi initiator name?

2019-08-22 Thread Gianluca Cecchi
On Wed, Aug 21, 2019 at 7:07 PM Dan Poltawski 
wrote:

> When I added the first node a 'random' initiator name was generated of
> form:
>
> # cat /etc/iscsi/initiatorname.iscsi
> InitiatorName=iqn.1994-05.com.redhat:[RANDOM]
>
> Having attempted to add another node, this node has another initiator name
> generated and can't access the storage. Is there a way to configure this
> initiator name to a static value which will be configured when new nodes
> get added to the cluster? Or is there some reason for this i'm missing?
>
> thanks,
>
> Dan
>

oVirt nodes are iSCSI clients, and each node needs to have a different
InitiatorName value.
If needed, you can modify the initiator name of a node before running
discovery in oVirt, then reboot it (or restart the iSCSI services).
You can edit the file directly or use the iscsi-iname command.
The name must be an IQN in the format
iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier
Typically you configure your iSCSI storage array to grant access to the
LUN used as a storage domain to all of the oVirt hosts' initiators;
alternatively (or additionally) you can use CHAP authentication.
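As a sketch of the steps above (the IQN and node identifier are hypothetical examples, not values from this thread), you can validate a static name against the IQN format and emit the line that goes into /etc/iscsi/initiatorname.iscsi before running discovery:

```shell
# Hedged sketch: compose a static initiator name, check it looks like a
# valid iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier, and print the
# line destined for /etc/iscsi/initiatorname.iscsi.
NEW_IQN="iqn.2019-08.com.example:ovirt-node2"   # hypothetical identifier
if echo "$NEW_IQN" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.+)?$'; then
    echo "InitiatorName=$NEW_IQN"               # write this into initiatorname.iscsi
else
    echo "invalid IQN: $NEW_IQN" >&2
fi
# Then restart the iSCSI services (e.g. iscsid) or reboot the node
# before running discovery in oVirt. iscsi-iname can also generate a
# valid name for you instead of composing one by hand.
```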

See also here:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/osm-create-iscsi-initiator
https://www.certdepot.net/rhel7-configure-iscsi-target-initiator-persistently/

HIH,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3SL6SWAVCJXKVDJMLOYZ2ZV2SSPVG2EK/


[ovirt-users] Data Center Contending/Non responsive - No SPM

2019-08-22 Thread Matthew B
Hello,

I'm having a problem with oVirt - we're running 4.3 on CentOS 7:

[root@ovirt ~]# rpm -q ovirt-engine; uname -r
ovirt-engine-4.3.4.3-1.el7.noarch
3.10.0-957.21.2.el7.x86_64

Currently the Data Center alternates between Non responsive and contending
status, and SPM selection fails.
The error in the events tab is:

VDSM compute5.domain command HSMGetTaskStatusVDS failed: Volume does not
exist: (u'2bffb8d0-dfb5-4b08-9c6b-716e11f280c2',)

Full error:

2019-08-21 13:53:53,507-0700 ERROR (tasks/8) [storage.TaskManager.Task]
(Task='3076bb8c-7462-4573-a832-337da478ae0e') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336,
in run
return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 333, in
startSpm
self._upgradePool(expectedDomVersion, __securityOverride=True)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
79, in wrapper
return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 484, in
_upgradePool
str(targetDomVersion))
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1108, in
_convertDomain
targetFormat)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/formatconverter.py",
line 447, in convert
converter(repoPath, hostId, imageRepo, isMsd)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/formatconverter.py",
line 405, in v5DomainConverter
domain.convert_volumes_metadata(target_version)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 813,
in convert_volumes_metadata
for vol in self.iter_volumes():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 764, in
iter_volumes
yield self.produceVolume(img_id, vol_id)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 846, in
produceVolume
volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterVolume.py",
line 45, in __init__
volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 817,
in __init__
self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line
71, in __init__
volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 86,
in __init__
self.validate()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 112,
in validate
self.validateVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line
131, in validateVolumePath
raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist:
(u'2bffb8d0-dfb5-4b08-9c6b-716e11f280c2',)

Most of our VMs are still running, but we can't start or restart any VMs.
All of the domains show as down since SPM selection fails. Any thoughts?

Thanks,
 -Matthew
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TBJTM6WSE274QHNOUW6OY4SOD34GSVYW/