[ovirt-users] ovirt vdsm and networking

2019-12-12 Thread kim . kargaard
Hi,

We are running CentOS and ovirt 4.3.4. We currently have four nodes and have 
set up the networks as follows:

ovirtmanagement network - set up as a tagged vlan with a static IP
SAN network - set up as a tagged vlan with a static IP
student network - set up as a tagged vlan
fw network - set up as a tagged vlan

The ovirtmanagement and SAN networks were configured on the CentOS boxes after 
installing CentOS and before adding the nodes as hosts. The student 
network and the fw network were configured through the ovirt admin panel. 
However, because of this, the student network is not assigned a static IP on 
the nodes and serves the VMs via DHCP on the firewall. The ovirt 
management network is set as the display network and every time I try to change 
that to the student network, ovirt admin panel tells me that I can't because 
there is no IP address assigned to the student network vlan. I tried using ip 
addr add to add a static IP on the one CentOS node, but this does not get 
picked up by ovirt and of course ovirt controls the actual vlan files on the 
CentOS box through VDSM, so any changes I make there will likely just be 
overwritten. 

So, my question is, should I stop VDSM, add the IP to the vlan files on the 
CentOS nodes and then restart VDSM or is there a proper way of solving this? I 
need the Display Network to be set to the student network. 
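For reference, the supported path is the engine's Setup Host Networks flow (UI, REST API, or SDK), which pushes the configuration through VDSM so it persists and is not overwritten. Below is a rough sketch of the JSON body for the REST `setupnetworks` host action; the network name and addresses are placeholders, and the exact field shape should be verified against your engine's API version:

```python
# Sketch of a Setup Host Networks request body for the oVirt REST API
# (POST /ovirt-engine/api/hosts/{host_id}/setupnetworks).
# Field names follow the engine API model; verify against your version.

def setup_networks_body(network_name, address, netmask, gateway=None):
    """Build a JSON body attaching `network_name` with a static IPv4 address."""
    ip = {"address": address, "netmask": netmask, "version": "v4"}
    if gateway:
        ip["gateway"] = gateway
    return {
        "modified_network_attachments": {
            "network_attachment": [{
                "network": {"name": network_name},
                "ip_address_assignments": {
                    "ip_address_assignment": [{
                        "assignment_method": "static",
                        "ip": ip,
                    }]
                },
            }]
        }
    }

# Illustrative values only -- substitute your own network name and addresses.
body = setup_networks_body("student", "10.10.10.21", "255.255.255.0")
```

Changing files by hand under VDSM (even with VDSM stopped) risks being reverted the next time the engine synchronizes host networks, which is why the API route is preferable.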

Thank you for any help.

Kim 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O5GJRNIPOXJLZ2KEPGZJZZNG57OTTLGI/


[ovirt-users] Re: Ovirt OVN help needed

2019-12-12 Thread Dominik Holler
On Thu, Dec 12, 2019 at 7:50 PM Strahil  wrote:

> Hi Dominik,
>
> Thanks for the reply.
>
> Sadly the openstack module is missing on the engine and I have to figure
> it out.
>

The module can be installed with 'pip install openstacksdk'; please find an
example at
https://github.com/oVirt/ovirt-system-tests/blob/master/network-suite-master/control.sh#L20
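ovirt-provider-ovn exposes the OpenStack Networking v2.0 API (by default on port 9696 of the engine host), so any client of that API can do the cleanup; openstacksdk just wraps these endpoints. A stdlib-only sketch of the URL plumbing involved, with the engine hostname as a placeholder and the actual authenticated calls left commented out:

```python
import urllib.request

# Placeholder endpoint -- substitute your engine host; ovirt-provider-ovn
# serves the OpenStack Networking v2.0 API on port 9696 by default.
BASE = "https://engine.example.com:9696/v2.0"

def list_url(resource):
    """URL listing a Networking API collection (ports, subnets, networks)."""
    return f"{BASE}/{resource}"

def delete_url(resource, obj_id):
    """URL deleting a single object from a collection."""
    return f"{BASE}/{resource}/{obj_id}"

def cleanup_order():
    # Ports reference subnets and networks, so delete them first.
    return ["ports", "subnets", "networks"]

# Example (not executed here): DELETE each port, then subnets, then networks.
# req = urllib.request.Request(delete_url("ports", port_id), method="DELETE")
# urllib.request.urlopen(req)  # plus an auth token header in a real setup
```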


> Can't I just undeploy the ovn and then redeploy it back ?
>

No idea, I never tried this.
If in doubt, you can delete one entity after another via ovn-nbctl.
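As a hedged sketch of that manual path (the port and switch names below are illustrative; run this only against a northbound DB you can afford to lose), the usual order is ports first, then their switches:

```shell
#!/bin/sh
# Sketch: tear down OVN northbound entities one at a time.
# Falls back to echoing the commands when ovn-nbctl is not installed.
NBCTL="ovn-nbctl"
command -v ovn-nbctl >/dev/null 2>&1 || NBCTL="echo ovn-nbctl"

$NBCTL show                      # inspect current switches and their ports
$NBCTL lsp-del some-port-name    # delete a logical switch port (illustrative name)
$NBCTL ls-del some-switch-name   # then delete the now-empty logical switch
```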



> Best Regards,
> Strahil Nikolov
> On Dec 12, 2019 09:32, Dominik Holler  wrote:
>
> The cleanest way to clean up is to remove all entities on the OpenStack
> Network API on ovirt-provider-ovn, e.g. by something like
>
> https://gist.github.com/dominikholler/19bcdc5f14f42ab5f069086fd2ff5e37#file-list_security_groups-py-L25
> This should work, if not, please report a bug.
>
> To bypass the ovirt-provider-ovn, which is not recommended and might end
> in an inconsistent state, you could use ovn-nbctl .
>
>
>
> On Thu, Dec 12, 2019 at 3:33 AM Strahil Nikolov 
> wrote:
>
> Hi Community,
>
> can someone hint me how to get rid of some ports? I just want to 'reset'
> my ovn setup.
>
> Here is what I have so far:
>
> [root@ovirt1 openvswitch]# ovs-vsctl list interface
> _uuid   : be89c214-10e4-4a97-a9eb-1b82bc433a24
> admin_state : up
> bfd : {}
> bfd_status  : {}
> cfm_fault   : []
> cfm_fault_status: []
> cfm_flap_count  : []
> cfm_health  : []
> cfm_mpid: []
> cfm_remote_mpids: []
> cfm_remote_opstate  : []
> duplex  : []
> error   : []
> external_ids: {}
> ifindex : 35
> ingress_policing_burst: 0
> ingress_policing_rate: 0
> lacp_current: []
> link_resets : 0
> link_speed  : []
> link_state  : up
> lldp: {}
> mac : []
> mac_in_use  : "7a:7d:1d:a7:43:1d"
> mtu : []
> mtu_request : []
> name: "ovn-25cc77-0"
> ofport  : 6
> ofport_request  : []
> options : {csum="true", key=flow, remote_ip="192.168.1.64"}
> other_config: {}
> statistics  : {rx_bytes=0, rx_packets=0, tx_bytes=0, tx_packets=0}
> status  : {tunnel_egress_iface=ovirtmgmt,
> tunnel_egress_iface_carrier=up}
> type: geneve
>
> _uuid   : ec6a6688-e5d6-4346-ac47-ece1b8379440
> admin_state : down
> bfd : {}
> bfd_status  : {}
> cfm_fault   : []
> cfm_fault_status: []
> cfm_flap_count  : []
> cfm_health  : []
> cfm_mpid: []
> cfm_remote_mpids: []
> cfm_remote_opstate  : []
> duplex  : []
> error   : []
> external_ids: {}
> ifindex : 13
> ingress_policing_burst: 0
> ingress_policing_rate: 0
> lacp_current: []
> link_resets : 0
> link_speed  : []
> link_state  : down
> lldp: {}
> mac : []
> mac_in_use  : "66:36:dd:63:dc:48"
> mtu : 1500
> mtu_request : []
> name: br-int
> ofport  : 65534
> ofport_request  : []
> options : {}
> other_config: {}
> statistics  : {collisions=0, rx_bytes=0, rx_crc_err=0,
> rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0,
> tx_bytes=0, tx_dropped=0, tx_errors=0, tx_packets=0}
> status  : {driver_name=openvswitch}
> type: internal
>
> _uuid   : 1e511b4d-f7c2-499f-bd8c-07236e7bb7af
> admin_state : up
> bfd : {}
> bfd_status  : {}
> cfm_fault   : []
> cfm_fault_status: []
> cfm_flap_count  : []
> cfm_health  : []
> cfm_mpid: []
> cfm_remote_mpids: []
> cfm_remote_opstate  : []
> duplex  : []
> error   : []
> external_ids: {}
> ifindex : 35
> ingress_policing_burst: 0
> ingress_policing_rate: 0
> lacp_current: []
> link_resets : 0
> link_speed  : []
> link_state  : up
> lldp: {}
> mac : []
> mac_in_use  : "1a:85:d1:d9:e2:a5"
> mtu : []
> mtu_request : []
> name: "ovn-566849-0"
> ofport  : 5
> ofport_request  : []
> options : {csum="true", key=flow, remote_ip="192.168.1.41"}
> other_config: {}
> statistics  : {rx_bytes=0, rx_packets=0, tx_bytes=0, tx_packets=0}
> status  : {tunnel_egress_iface=ovirtmgmt,
> tunnel_egress_iface_carrier=up}
> type: geneve
>
>
> When I try to remove a port - it never ends (just hanging):
>
> [root@ovirt1 openvswitch]# ovs-vsctl --dry-run del-port br-int
> ovn-25cc77-0
> In journal  I see only this:
> дек 12 04:13:57 ovirt1.localdomain ovs-vsctl[22030]:
> 
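On the hanging del-port above: ovs-vsctl blocks until ovs-vswitchd acknowledges the change, so a dead or wedged vswitchd makes it wait forever. A bounded variant is sketched below; `--timeout` and `--if-exists` are standard ovs-vsctl options, and the script echoes the command when ovs-vsctl is absent:

```shell
#!/bin/sh
# Give up after 5 seconds instead of hanging; don't error if the port is gone.
VSCTL="ovs-vsctl"
command -v ovs-vsctl >/dev/null 2>&1 || VSCTL="echo ovs-vsctl"

$VSCTL --timeout=5 --if-exists del-port br-int ovn-25cc77-0
# If this times out, check the daemons the command is waiting on:
#   systemctl status openvswitch ovsdb-server ovs-vswitchd
```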

[ovirt-users] Re: Problems after 4.3.8 update

2019-12-12 Thread Jayme
I believe I was able to get past this by stopping the engine volume then
unmounting the glusterfs engine mount on all hosts and re-starting the
volume. I was able to start hostedengine on host0.

I'm still facing a few problems:

1. I'm still seeing this issue in each host's logs:

Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
Failed scanning for OVF_STORE due to Command Volume.getInfo with args
{'storagepoolID': '----',
'storagedomainID': 'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID':
u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID':
u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201,
message=Volume does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))
Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
Please ensure you already added your first data domain for regular VMs


2. Most of my gluster volumes still have un-healed entries which I can't
seem to heal.  I'm not sure what the answer is here.
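For stuck heals, a commonly suggested first sequence is sketched below (hedged: the volume name is taken from this thread, and `heal ... full` forces a full crawl, which can be slow on large volumes):

```shell
#!/bin/sh
# Inspect and retrigger self-heal on one volume; echoes when gluster is absent.
GLUSTER="gluster"
command -v gluster >/dev/null 2>&1 || GLUSTER="echo gluster"

$GLUSTER volume heal engine info summary   # per-brick count of pending entries
$GLUSTER volume heal engine full           # force a full self-heal crawl
$GLUSTER volume status engine shd          # confirm self-heal daemons are up
```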

On Fri, Dec 13, 2019 at 12:33 AM Jayme  wrote:

> I was able to get the hosted engine started manually via Virsh after
> re-creating a missing symlink in /var/run/vdsm/storage -- I later shut it
> down and am still having the same problem with ha broker starting.  It
> appears that the problem *might* be with a corrupt ha metadata file,
> although gluster is not stating there is split brain on the engine volume
>
> I'm seeing this:
>
> ls -al
> /rhev/data-center/mnt/glusterSD/orchard0\:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
> ls: cannot access
> /rhev/data-center/mnt/glusterSD/orchard0:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/hosted-engine.metadata:
> Input/output error
> total 0
> drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:30 .
> drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
> lrwxrwxrwx. 1 vdsm kvm 132 Dec 13 00:30 hosted-engine.lockspace ->
> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
> l?? ? ?? ?? hosted-engine.metadata
>
> Clearly showing some sort of issue with hosted-engine.metadata on the
> client mount.
>
> on each node in /gluster_bricks I see this:
>
> # ls -al
> /gluster_bricks/engine/engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
> total 0
> drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:31 .
> drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
> lrwxrwxrwx. 2 vdsm kvm 132 Dec 13 00:31 hosted-engine.lockspace ->
> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
> lrwxrwxrwx. 2 vdsm kvm 132 Dec 12 16:30 hosted-engine.metadata ->
> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c
>
>  ls -al
> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c
> -rw-rw. 1 vdsm kvm 1073741824 Dec 12 16:48
> /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c
>
>
> I'm not sure how to proceed at this point.  Do I have data corruption, a
> gluster split-brain issue or something else?  Maybe I just need to
> re-generate metadata for the hosted engine?
>
> On Thu, Dec 12, 2019 at 6:36 PM Jayme  wrote:
>
>> I'm running a three server HCI.  Up and running on 4.3.7 with no
>> problems.  Today I updated to 4.3.8.  Engine upgraded fine, rebooted.
>> First host updated fine, rebooted and let all gluster volumes heal.  Put
>> second host in maintenance, upgraded successfully, rebooted.  Waited for
>> gluster volumes to heal for over an hour but the heal process was not
>> completing.  I tried restarting gluster services as well as the host with
>> no success.
>>
>> I'm in a state right now where there are pending heals on almost all of
>> my volumes.  Nothing is reporting split-brain, but the heals are not
>> completing.
>>
>> All vms are still currently running except hosted engine.  Hosted engine
>> was running but on the 2nd host I upgraded I was seeing errors such as:
>>
>> Dec 12 16:34:39 orchard2 journal: ovirt-ha-agent
>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
>> Failed scanning for OVF_STORE due to Command Volume.getInfo with args
>> {'storagepoolID': '----',
>> 'storagedomainID': 'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID':
>> u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID':
>> u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201,
>> message=Volume does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))
>>
>> I shut down the engine VM and attempted a manual heal on the engine
>> volume.  I cannot start the 

[ovirt-users] Re: Ovirt OVN help needed

2019-12-12 Thread Strahil
Hi Dominik, All,

I've checked 
'https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W6U4XJHNMYMD3WIXDCPGOXLW6DFMCYIM/'
 and the user there managed to clean up and start over.

I have removed the ovn-external-provider  from UI, but I forgot to copy the 
data from the fields.

Do you know of any reference guide (or any tips & tricks) for adding OVN?

Thanks in advance.


Best Regards,
Strahil Nikolov

On Dec 12, 2019 20:49, Strahil  wrote:
>
> Hi Dominik,
>
> Thanks for the reply.
>
> Sadly the openstack module is missing on the engine and I have to figure it 
> out.
>
> Can't I just undeploy the ovn and then redeploy it back ?
>
> Best Regards,
> Strahil Nikolov
>
> On Dec 12, 2019 09:32, Dominik Holler  wrote:
>>
>> The cleanest way to clean up is to remove all entities on the OpenStack 
>> Network API on ovirt-provider-ovn, e.g. by something like
>> https://gist.github.com/dominikholler/19bcdc5f14f42ab5f069086fd2ff5e37#file-list_security_groups-py-L25
>> This should work, if not, please report a bug.
>>
>> To bypass the ovirt-provider-ovn, which is not recommended and might end in 
>> an inconsistent state, you could use ovn-nbctl .
>>
>>
>>
>> On Thu, Dec 12, 2019 at 3:33 AM Strahil Nikolov  
>> wrote:
>>>
>>> Hi Community,
>>>
>>> can someone hint me how to get rid of some ports? I just want to 'reset' my 
>>> ovn setup.
>>>
>>> Here is what I have so far:
>>>
>>> [root@ovirt1 openvswitch]# ovs-vsctl list interface
>>> _uuid   : be89c214-10e4-4a97-a9eb-1b82bc433a24
>>> admin_state : up
>>> bfd : {}
>>> bfd_status  : {}
>>> cfm_fault   : []
>>> cfm_fault_status: []
>>> cfm_flap_count  : []
>>> cfm_health  : []
>>> cfm_mpid: []
>>> cfm_remote_mpids: []
>>> cfm_remote_opstate  : []
>>> duplex  : []
>>> error   : []
>>> external_ids: {}
>>> ifindex : 35
>>> ingress_policing_burst: 0
>>> ingress_policing_rate: 0
>>> lacp_current: []
>>> link_resets : 0
>>> link_speed  : []
>>> link_state  : up
>>> lldp: {}
>>> mac : []
>>> mac_in_use  : "7a:7d:1d:a7:43:1d"
>>> mtu : []
>>> mtu_request : []
>>> name: "ovn-25cc77-0"
>>> ofport  : 6
>>> ofport_request  : []
>>> options : {csum="true", key=flow, remote_ip="192.168.1.64"}
>>> other_config: {}
>>> statistics  : {rx_bytes=0, rx_packets=0, tx_bytes=0, tx_packets=0}
>>> status  : {tunnel_egress_iface=ovirtmgmt,
>>> tunnel_egress_iface_carrier=up}
>>> type: geneve
>>>
>>> _uuid   : ec6a6688-e5d6-4346-ac47-ece1b8379440
>>> admin_state : down
>>> bfd : {}
>>> bfd_status  : {}
>>> cfm_fault   : []
>>> cfm_fault_status: []
>>> cfm_flap_count  : []
>>> cfm_health  : []
>>> cfm_mpid: []
>>> cfm_remote_mpids: []
>>> cfm_remote_opstate  : []
>>> duplex  : []
>>> error   : []
>>> external_ids: {}
>>> ifindex : 13
>>> ingress_policing_burst: 0
>>> ingress_policing_rate: 0
>>> lacp_current: []
>>> link_resets : 0
>>> link_speed  : []
>>> link_state  : down
>>> lldp: {}
>>> mac : []
>>> mac_in_use  : "66:36:dd:63:dc:48"
>>> mtu : 1500
>>> mtu_request : []
>>> name: br-int
>>> ofport  : 65534
>>> ofport_request  : []
>>> options : {}
>>> other_config: {}
>>> statistics  : {collisions=0, rx_bytes=0, rx_crc_err=0,
>>> rx_dropped=0, rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0,
>>> tx_bytes=0, tx_dropped=0, tx_errors=0, tx_packets=0}
>>> status  : {driver_name=openvswitch}
>>> type: internal
>>>
>>> _uuid   : 1e511b4d-f7c2-499f-bd8c-07236e7bb7af
>>> admin_state : up
>>> bfd : {}
>>> bfd_status  : {}
>>> cfm_fault   : []
>>> cfm_fault_status: []
>>> cfm_flap_count  : []
>>> cfm_health  : []
>>> cfm_mpid: []
>>> cfm_remote_mpids: []
>>> cfm_remote_opstate  : []
>>> duplex  : []
>>> error   : []
>>> external_ids: {}
>>> ifindex : 35
>>> ingress_policing_burst: 0
>>> ingress_policing_rate: 0
>>> lacp_current: []
>>> link_resets : 0
>>> link_speed  : []
>>> link_state  : up
>>> lldp: {}
>>> mac : []
>>> mac_in_use  : "1a:85:d1:d9:e2:a5"
>>> mtu : []
>>> mtu_request : []
>>> name: "ovn-566849-0"
>>> ofport  : 5
>>> ofport_request  : []
>>> options : {csum="true", key=flow,

[ovirt-users] Re: Problems after 4.3.8 update

2019-12-12 Thread Jayme
I was able to get the hosted engine started manually via Virsh after
re-creating a missing symlink in /var/run/vdsm/storage -- I later shut it
down and am still having the same problem with ha broker starting.  It
appears that the problem *might* be with a corrupt ha metadata file,
although gluster is not stating there is split brain on the engine volume

I'm seeing this:

ls -al
/rhev/data-center/mnt/glusterSD/orchard0\:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
ls: cannot access
/rhev/data-center/mnt/glusterSD/orchard0:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/hosted-engine.metadata:
Input/output error
total 0
drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:30 .
drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
lrwxrwxrwx. 1 vdsm kvm 132 Dec 13 00:30 hosted-engine.lockspace ->
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
l?? ? ?? ?? hosted-engine.metadata

Clearly showing some sort of issue with hosted-engine.metadata on the
client mount.

on each node in /gluster_bricks I see this:

# ls -al
/gluster_bricks/engine/engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
total 0
drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:31 .
drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
lrwxrwxrwx. 2 vdsm kvm 132 Dec 13 00:31 hosted-engine.lockspace ->
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
lrwxrwxrwx. 2 vdsm kvm 132 Dec 12 16:30 hosted-engine.metadata ->
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c

 ls -al
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c
-rw-rw. 1 vdsm kvm 1073741824 Dec 12 16:48
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/66bf05fa-bf50-45ec-98d8-d2040317/a2250415-5ff0-42ab-8071-cd9d67c3048c


I'm not sure how to proceed at this point.  Do I have data corruption, a
gluster split-brain issue or something else?  Maybe I just need to
re-generate metadata for the hosted engine?
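If the metadata file itself is corrupt rather than split-brained, hosted-engine can rewrite it. The options below are real hosted-engine flags, but the host id is a placeholder and global maintenance should be set first; treat this as a hedged sketch, not a verified recovery procedure:

```shell
#!/bin/sh
# Re-create hosted-engine HA metadata; echoes the commands if the tool is absent.
HE="hosted-engine"
command -v hosted-engine >/dev/null 2>&1 || HE="echo hosted-engine"

$HE --set-maintenance --mode=global              # stop HA agents from acting
$HE --clean-metadata --host-id=1 --force-clean   # host id 1 is a placeholder
$HE --reinitialize-lockspace --force
$HE --set-maintenance --mode=none
```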

On Thu, Dec 12, 2019 at 6:36 PM Jayme  wrote:

> I'm running a three server HCI.  Up and running on 4.3.7 with no
> problems.  Today I updated to 4.3.8.  Engine upgraded fine, rebooted.
> First host updated fine, rebooted and let all gluster volumes heal.  Put
> second host in maintenance, upgraded successfully, rebooted.  Waited for
> gluster volumes to heal for over an hour but the heal process was not
> completing.  I tried restarting gluster services as well as the host with
> no success.
>
> I'm in a state right now where there are pending heals on almost all of my
> volumes.  Nothing is reporting split-brain, but the heals are not
> completing.
>
> All vms are still currently running except hosted engine.  Hosted engine
> was running but on the 2nd host I upgraded I was seeing errors such as:
>
> Dec 12 16:34:39 orchard2 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
> Failed scanning for OVF_STORE due to Command Volume.getInfo with args
> {'storagepoolID': '----',
> 'storagedomainID': 'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID':
> u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID':
> u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201,
> message=Volume does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))
>
> I shut down the engine VM and attempted a manual heal on the engine
> volume.  I cannot start the engine on any host now.  I get:
>
> The hosted engine configuration has not been retrieved from shared
> storage. Please ensure that ovirt-ha-agent is running and the storage
> server is reachable.
>
> I'm seeing ovirt-ha-agent crashing on all three nodes:
>
> Dec 12 18:30:48 orchard0 python: detected unhandled Python exception in
> '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
> Dec 12 18:30:48 orchard0 abrt-server: Duplicate: core backtrace
> Dec 12 18:30:48 orchard0 abrt-server: DUP_OF_DIR:
> /var/tmp/abrt/Python-2019-03-14-21:02:52-44318
> Dec 12 18:30:48 orchard0 abrt-server: Deleting problem directory
> Python-2019-12-12-18:30:48-23193 (dup of Python-2019-03-14-21:02:52-44318)
> Dec 12 18:30:49 orchard0 vdsm[6087]: ERROR failed to retrieve Hosted
> Engine HA score '[Errno 2] No such file or directory'Is the Hosted Engine
> setup finished?
> Dec 12 18:30:49 orchard0 systemd: ovirt-ha-broker.service: main process
> exited, code=exited, status=1/FAILURE
> Dec 12 18:30:49 orchard0 systemd: Unit ovirt-ha-broker.service entered
> failed state.
> Dec 12 18:30:49 orchard0 systemd: ovirt-ha-broker.service failed.
> Dec 12 18:30:49 orchard0 systemd: ovirt-ha-broker.service holdoff time
> over, scheduling restart.
> Dec 12 18:30:49 orchard0 systemd: Cannot add dependency job for unit
> lvm2-lvmetad.socket, ignoring: Unit is 

[ovirt-users] Re: Still having NFS issues. (Permissions)

2019-12-12 Thread Nir Soffer
On Thu, Dec 12, 2019 at 6:36 PM Milan Zamazal  wrote:
>
> Strahil  writes:
>
> > Why do you use  'all_squash' ?
> >
> > all_squashMap all uids and gids to the anonymous user. Useful for
> > NFS-exported public FTP directories, news spool directories, etc. The
> > opposite option is no_all_squash, which is the default setting.
>
> AFAIK all_squash,anonuid=36,anongid=36 is the recommended NFS setting
> for oVirt and the only one guaranteed to work.

Any user which is not vdsm or in group kvm should not have access to
storage, so all_squash is not needed.

anonuid=36,anongid=36 is required only for root_squash, I think because libvirt
is accessing storage as root.

We probably need to add libvirt to the kvm group, like we do with sanlock,
so we don't have to allow root access to storage. This is how we allow
sanlock access to vdsm-managed storage.
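Put together, that reasoning suggests an export line like the following should be sufficient (a sketch: the path is illustrative, and Tony's all_squash variant later in this thread also works):

```
/data/ovirt *(rw,sync,no_subtree_check,root_squash,anonuid=36,anongid=36)
```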

> Regards,
> Milan
>
> > Best Regards,
> > Strahil Nikolov
> >
> > On Dec 10, 2019 07:46, Tony Brian Albers  wrote:
> >>
> >> On Mon, 2019-12-09 at 18:43 +, Robert Webb wrote:
> >> > To add, the 757 permission does not need to be on the .lease or the
> >> > .meta files.
> >> >
> >> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/KZF6RCSRW2QV3PUEJCJW5DZ54DLAOGAA/
> >>
> >> Good morning,
> >>
> >> Check SELinux just in case.
> >>
> >> Here's my config:
> >>
> >> NFS server:
> >> /etc/exports:
> >> /data/ovirt
> >> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
> >>
> >> Folder:
> >> [root@kst001 ~]# ls -ld /data/ovirt
> >> drwxr-xr-x 3 vdsm kvm 76 Jun  1  2017 /data/ovirt
> >>
> >> Subfolders:
> >> [root@kst001 ~]# ls -l /data/ovirt/*
> >> -rwxr-xr-x 1 vdsm kvm  0 Dec 10 06:38 /data/ovirt/__DIRECT_IO_TEST__
> >>
> >> /data/ovirt/a597d0aa-bf22-47a3-a8a3-e5cecf3e20e0:
> >> total 4
> >> drwxr-xr-x  2 vdsm kvm  117 Jun  1  2017 dom_md
> >> drwxr-xr-x 56 vdsm kvm 4096 Dec  2 14:51 images
> >> drwxr-xr-x  4 vdsm kvm   42 Jun  1  2017 master
> >> [root@kst001 ~]#
> >>
> >>
> >> The user:
> >> [root@kst001 ~]# id vdsm
> >> uid=36(vdsm) gid=36(kvm) groups=36(kvm)
> >> [root@kst001 ~]#
> >>
> >> And output from 'mount' on a host:
> >> kst001:/data/ovirt on /rhev/data-center/mnt/kst001:_data_ovirt type nfs
> >> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
> >> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr= >> server-
> >> ip>,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr= >> -server-ip>)
> >>
> >>
> >> HTH
> >>
> >> /tony
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CO4UFLVDTSLO5S3XPA4PYXG3OGUSHSVP/


[ovirt-users] Re: Still having NFS issues. (Permissions)

2019-12-12 Thread Nir Soffer
On Fri, Dec 13, 2019 at 1:39 AM Nir Soffer  wrote:
>
> On Tue, Dec 10, 2019 at 4:35 PM Robert Webb  wrote:
>
> ...
> > >https://ovirt.org/develop/troubleshooting-nfs-storage-issues.html
> > >
> > >Generally speaking:
> > >
> > >Files there are created by vdsm (vdsmd), but are used (when running VMs)
> > >by qemu. So both of them need access.
> >
> > So the link to the NFS storage troubleshooting page is where I found that 
> > the perms needed to be 755.
>
> I think this is an error in the troubleshooting page. There is no
> reason to allow access to
> other users except vdsm:kvm.

The page mentions other daemons:

>> In principle, the user vdsm, with uid 36 and gid 36, must have read and 
>> write permissions on
>> all NFS exports. However, some daemons on the hypervisor hosts (for example, 
>> sanlock)
>> use a different uid but need access to the directory too.

But other daemons that should have access to vdsm storage are in the
kvm group (vdsm configures this during installation):

$ id sanlock
uid=179(sanlock) gid=179(sanlock) groups=179(sanlock),6(disk),36(kvm),107(qemu)
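The group-membership check behind that `id` output can be scripted; a small stdlib sketch (the injectable lookup parameter exists only for illustration and testing):

```python
import grp

def members_of(group_name, getgrnam=grp.getgrnam):
    """Supplementary members of a group, or [] when the group does not exist."""
    try:
        return list(getgrnam(group_name).gr_mem)
    except KeyError:
        return []

def daemon_can_use_vdsm_storage(user, kvm_members):
    # A daemon gets access when it is vdsm itself or sits in the kvm group.
    return user == "vdsm" or user in kvm_members

# e.g. on the host above: daemon_can_use_vdsm_storage("sanlock", members_of("kvm"))
```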

> ...
> > Like this:
> >
> > drwxr-xr-x+ 2 vdsm kvm4096 Dec 10 09:03 .
> > drwxr-xr-x+ 3 vdsm kvm4096 Dec 10 09:02 ..
> > -rw-rw  1 vdsm kvm 53687091200 Dec 10 09:02 
> > 5a514067-82fb-42f9-b436-f8f93883fe27
> > -rw-rw  1 vdsm kvm 1048576 Dec 10 09:03 
> > 5a514067-82fb-42f9-b436-f8f93883fe27.lease
> > -rw-r--r--  1 vdsm kvm 298 Dec 10 09:03 
> > 5a514067-82fb-42f9-b436-f8f93883fe27.meta
> >
> >
> > So, with all that said, I cleaned everything up and my directory 
> > permissions look like what Tony posted for his. I have added in his export 
> > options to my setup and rebooted my host.
> >
> > I created a new VM from scratch and the files under images now look like 
> > this:
> >
> > drwxr-xr-x+ 2 vdsm kvm4096 Dec 10 09:03 .
> > drwxr-xr-x+ 3 vdsm kvm4096 Dec 10 09:02 ..
> > -rw-rw  1 vdsm kvm 53687091200 Dec 10 09:02 
> > 5a514067-82fb-42f9-b436-f8f93883fe27
> > -rw-rw  1 vdsm kvm 1048576 Dec 10 09:03 
> > 5a514067-82fb-42f9-b436-f8f93883fe27.lease
> > -rw-r--r--  1 vdsm kvm 298 Dec 10 09:03 
> > 5a514067-82fb-42f9-b436-f8f93883fe27.meta
> >
> >
> > Still not the 755 as expected,
>
> It is not expected, the permissions look normal.
>
> These are the permissions used for volumes on file based storage:
>
> lib/vdsm/storage/constants.py:FILE_VOLUME_PERMISSIONS = 0o660
>
> but I am guessing with the addition of the "anonuid=36,anongid=36" to
> the exports, everything is now working as expected. The VM will boot
> and run as expected. There was nothing in any of the documentation
> which alluded to possibly needing the additional options in the NFS
> export options.
>
> I think this is a libvirt issue: it tries to access volumes as root, and
> without anonuid=36,anongid=36
> it will be squashed to nobody and fail.
>
> Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3KZII244RKMFPKSYD5WJ47IES4XLT2LI/


[ovirt-users] Re: Still having NFS issues. (Permissions)

2019-12-12 Thread Nir Soffer
On Tue, Dec 10, 2019 at 4:35 PM Robert Webb  wrote:

...
> >https://ovirt.org/develop/troubleshooting-nfs-storage-issues.html
> >
> >Generally speaking:
> >
> >Files there are created by vdsm (vdsmd), but are used (when running VMs)
> >by qemu. So both of them need access.
>
> So the link to the NFS storage troubleshooting page is where I found that the 
> perms needed to be 755.

I think this is an error in the troubleshooting page. There is no
reason to allow access to
other users except vdsm:kvm.

...
> Like this:
>
> drwxr-xr-x+ 2 vdsm kvm4096 Dec 10 09:03 .
> drwxr-xr-x+ 3 vdsm kvm4096 Dec 10 09:02 ..
> -rw-rw  1 vdsm kvm 53687091200 Dec 10 09:02 
> 5a514067-82fb-42f9-b436-f8f93883fe27
> -rw-rw  1 vdsm kvm 1048576 Dec 10 09:03 
> 5a514067-82fb-42f9-b436-f8f93883fe27.lease
> -rw-r--r--  1 vdsm kvm 298 Dec 10 09:03 
> 5a514067-82fb-42f9-b436-f8f93883fe27.meta
>
>
> So, with all that said, I cleaned everything up and my directory permissions 
> look like what Tony posted for his. I have added in his export options to my 
> setup and rebooted my host.
>
> I created a new VM from scratch and the files under images now look like this:
>
> drwxr-xr-x+ 2 vdsm kvm4096 Dec 10 09:03 .
> drwxr-xr-x+ 3 vdsm kvm4096 Dec 10 09:02 ..
> -rw-rw  1 vdsm kvm 53687091200 Dec 10 09:02 
> 5a514067-82fb-42f9-b436-f8f93883fe27
> -rw-rw  1 vdsm kvm 1048576 Dec 10 09:03 
> 5a514067-82fb-42f9-b436-f8f93883fe27.lease
> -rw-r--r--  1 vdsm kvm 298 Dec 10 09:03 
> 5a514067-82fb-42f9-b436-f8f93883fe27.meta
>
>
> Still not the 755 as expected,

It is not expected, the permissions look normal.

These are the permissions used for volumes on file based storage:

lib/vdsm/storage/constants.py:FILE_VOLUME_PERMISSIONS = 0o660
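That octal maps exactly to the `-rw-rw----` entries in the listings above, which a quick stdlib check confirms:

```python
import stat

FILE_VOLUME_PERMISSIONS = 0o660  # value quoted from vdsm's constants above

# For a regular file, 0o660 renders as read/write for owner (vdsm) and
# group (kvm), nothing for others -- matching the ls output "-rw-rw----".
mode_string = stat.filemode(stat.S_IFREG | FILE_VOLUME_PERMISSIONS)
print(mode_string)  # -rw-rw----
```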

but I am guessing with the addition of the "anonuid=36,anongid=36" to
the exports, everything is now working as expected. The VM will boot
and run as expected. There was nothing in any of the documentation
which alluded to possibly needing the additional options in the NFS
export options.

I think this is a libvirt issue: it tries to access volumes as root, and
without anonuid=36,anongid=36
it will be squashed to nobody and fail.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D6MXQGZB2SHJ2WCKBWYXD5CQ2WBJGT5B/


[ovirt-users] Problems after 4.3.8 update

2019-12-12 Thread Jayme
I'm running a three server HCI.  Up and running on 4.3.7 with no problems.
Today I updated to 4.3.8.  Engine upgraded fine, rebooted.  First host
updated fine, rebooted and let all gluster volumes heal.  Put second host
in maintenance, upgraded successfully, rebooted.  Waited for gluster
volumes to heal for over an hour but the heal process was not completing.
I tried restarting gluster services as well as the host with no success.

I'm in a state right now where there are pending heals on almost all of my
volumes.  Nothing is reporting split-brain, but the heals are not
completing.

All vms are still currently running except hosted engine.  Hosted engine
was running but on the 2nd host I upgraded I was seeing errors such as:

Dec 12 16:34:39 orchard2 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
Failed scanning for OVF_STORE due to Command Volume.getInfo with args
{'storagepoolID': '----',
'storagedomainID': 'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID':
u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID':
u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201,
message=Volume does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))

I shut down the engine VM and attempted a manual heal on the engine
volume.  I cannot start the engine on any host now.  I get:

The hosted engine configuration has not been retrieved from shared storage.
Please ensure that ovirt-ha-agent is running and the storage server is
reachable.

I'm seeing ovirt-ha-agent crashing on all three nodes:

Dec 12 18:30:48 orchard0 python: detected unhandled Python exception in
'/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker'
Dec 12 18:30:48 orchard0 abrt-server: Duplicate: core backtrace
Dec 12 18:30:48 orchard0 abrt-server: DUP_OF_DIR:
/var/tmp/abrt/Python-2019-03-14-21:02:52-44318
Dec 12 18:30:48 orchard0 abrt-server: Deleting problem directory
Python-2019-12-12-18:30:48-23193 (dup of Python-2019-03-14-21:02:52-44318)
Dec 12 18:30:49 orchard0 vdsm[6087]: ERROR failed to retrieve Hosted Engine
HA score '[Errno 2] No such file or directory'Is the Hosted Engine setup
finished?
Dec 12 18:30:49 orchard0 systemd: ovirt-ha-broker.service: main process
exited, code=exited, status=1/FAILURE
Dec 12 18:30:49 orchard0 systemd: Unit ovirt-ha-broker.service entered
failed state.
Dec 12 18:30:49 orchard0 systemd: ovirt-ha-broker.service failed.
Dec 12 18:30:49 orchard0 systemd: ovirt-ha-broker.service holdoff time
over, scheduling restart.
Dec 12 18:30:49 orchard0 systemd: Cannot add dependency job for unit
lvm2-lvmetad.socket, ignoring: Unit is masked.
Dec 12 18:30:49 orchard0 systemd: Stopped oVirt Hosted Engine High
Availability Communications Broker.


Here is what gluster volume heal info on engine looks like, it's similar on
other volumes as well (although more heals pending on some of those):

 gluster volume heal engine info
Brick gluster0:/gluster_bricks/engine/engine
/d70b171e-7488-4d52-8cad-bbc581dbf16e/images/d909dc74-5bbd-4e39-b9b5-755c167a6ee8/2632f423-ed89-43d9-93a9-36738420b866.meta
/d70b171e-7488-4d52-8cad-bbc581dbf16e/images/053171e4-f782-42d7-9115-c602beb3c826/627b8f93-5373-48bb-bd20-a308a455e082.meta
/d70b171e-7488-4d52-8cad-bbc581dbf16e/master/tasks/a9b11e33-9b93-46a0-a36e-85063fd53ebe.backup
/d70b171e-7488-4d52-8cad-bbc581dbf16e/dom_md/ids
Status: Connected
Number of entries: 4

Brick gluster1:/gluster_bricks/engine/engine
/d70b171e-7488-4d52-8cad-bbc581dbf16e/images/d909dc74-5bbd-4e39-b9b5-755c167a6ee8/2632f423-ed89-43d9-93a9-36738420b866.meta
/d70b171e-7488-4d52-8cad-bbc581dbf16e/master/tasks/a9b11e33-9b93-46a0-a36e-85063fd53ebe.backup
/d70b171e-7488-4d52-8cad-bbc581dbf16e/images/053171e4-f782-42d7-9115-c602beb3c826/627b8f93-5373-48bb-bd20-a308a455e082.meta
/d70b171e-7488-4d52-8cad-bbc581dbf16e/dom_md/ids
Status: Connected
Number of entries: 4

Brick gluster2:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

I don't see much in vdsm.log, and the gluster logs look fairly normal to me; I'm
not seeing any obvious errors there.

As far as I can tell the underlying storage is fine.  Why are my gluster
volumes not healing and why is self-hosted engine failing to start?
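As a quick sanity check while waiting, the per-brick `Number of entries` from `gluster volume heal <vol> info` can be tallied with awk. A minimal sketch (not from the original report) that parses a captured sample of the output shown above — in real use, pipe the live command into awk instead of the heredoc:

```shell
# Tally pending heal entries per brick from `gluster volume heal <vol> info`.
# Replace the heredoc with:  gluster volume heal engine info | awk '...'
awk '/^Brick /{brick=$2} /^Number of entries:/{print brick, $NF}' <<'EOF'
Brick gluster0:/gluster_bricks/engine/engine
/d70b171e-7488-4d52-8cad-bbc581dbf16e/dom_md/ids
Status: Connected
Number of entries: 4

Brick gluster2:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
EOF
```

Running this repeatedly shows whether the counts are actually shrinking (here it prints `gluster0:/gluster_bricks/engine/engine 4` and `gluster2:/gluster_bricks/engine/engine 0`).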

More agent and broker logs:

==> agent.log <==
MainThread::ERROR::2019-12-12
18:36:09,056::hosted_engine::559::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Failed to start necessary monitors
MainThread::ERROR::2019-12-12
18:36:09,058::agent::144::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Traceback (most recent call last):
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 131, in _run_agent
return action(he)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 55, in action_proper
return he.start_monitoring()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 432, in 

[ovirt-users] Re: Ovirt OVN help needed

2019-12-12 Thread Strahil
Hi Dominik,

Thanks for the reply.

Sadly the OpenStack module is missing on the engine and I have to figure that out.

Can't I just undeploy OVN and then redeploy it?

Best Regards,
Strahil Nikolov

On Dec 12, 2019 09:32, Dominik Holler wrote:
>
> The cleanest way to clean up is to remove all entities via the OpenStack 
> Network API on ovirt-provider-ovn, e.g. by something like
> https://gist.github.com/dominikholler/19bcdc5f14f42ab5f069086fd2ff5e37#file-list_security_groups-py-L25
> This should work; if not, please report a bug.
>
> To bypass the ovirt-provider-ovn, which is not recommended and might end in 
> an inconsistent state, you could use ovn-nbctl.
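The API-based cleanup above can be rehearsed safely as a dry run that only prints the Neutron-style DELETE calls. The base URL, port 9696, and token header below are assumptions taken from the OpenStack Networking API that ovirt-provider-ovn emulates — verify them against your deployment, then drop the `echo` to execute for real:

```shell
# Dry run: print (do not execute) the cleanup calls against ovirt-provider-ovn.
PROVIDER="https://engine.example.com:9696/v2.0"   # hypothetical endpoint
TOKEN="REPLACE_WITH_TOKEN"                        # auth is deployment-specific
# Ports must be removed before the networks that own them.
for kind in ports networks; do
  echo curl -ks -H "X-Auth-Token: $TOKEN" -X DELETE "$PROVIDER/$kind/<uuid>"
done
```

Substitute the real UUIDs (e.g. from a GET on `$PROVIDER/ports`) for `<uuid>`.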
>
>
>
> On Thu, Dec 12, 2019 at 3:33 AM Strahil Nikolov  wrote:
>>
>> Hi Community,
>>
>> can someone hint me how to get rid of some ports? I just want to 'reset' my 
>> ovn setup.
>>
>> Here is what I have so far:
>>
>> [root@ovirt1 openvswitch]# ovs-vsctl list interface  
>> _uuid   : be89c214-10e4-4a97-a9eb-1b82bc433a24 
>> admin_state : up 
>> bfd : {} 
>> bfd_status  : {} 
>> cfm_fault   : [] 
>> cfm_fault_status    : [] 
>> cfm_flap_count  : [] 
>> cfm_health  : [] 
>> cfm_mpid    : [] 
>> cfm_remote_mpids    : [] 
>> cfm_remote_opstate  : [] 
>> duplex  : [] 
>> error   : [] 
>> external_ids    : {} 
>> ifindex : 35 
>> ingress_policing_burst: 0 
>> ingress_policing_rate: 0 
>> lacp_current    : [] 
>> link_resets : 0 
>> link_speed  : [] 
>> link_state  : up 
>> lldp    : {} 
>> mac : [] 
>> mac_in_use  : "7a:7d:1d:a7:43:1d" 
>> mtu : [] 
>> mtu_request : [] 
>> name    : "ovn-25cc77-0" 
>> ofport  : 6 
>> ofport_request  : [] 
>> options : {csum="true", key=flow, remote_ip="192.168.1.64"} 
>> other_config    : {} 
>> statistics  : {rx_bytes=0, rx_packets=0, tx_bytes=0, tx_packets=0} 
>> status  : {tunnel_egress_iface=ovirtmgmt, 
>> tunnel_egress_iface_carrier=up} 
>> type    : geneve 
>>
>> _uuid   : ec6a6688-e5d6-4346-ac47-ece1b8379440 
>> admin_state : down 
>> bfd : {} 
>> bfd_status  : {} 
>> cfm_fault   : [] 
>> cfm_fault_status    : [] 
>> cfm_flap_count  : [] 
>> cfm_health  : [] 
>> cfm_mpid    : [] 
>> cfm_remote_mpids    : [] 
>> cfm_remote_opstate  : [] 
>> duplex  : [] 
>> error   : [] 
>> external_ids    : {} 
>> ifindex : 13 
>> ingress_policing_burst: 0 
>> ingress_policing_rate: 0 
>> lacp_current    : [] 
>> link_resets : 0 
>> link_speed  : [] 
>> link_state  : down 
>> lldp    : {} 
>> mac : [] 
>> mac_in_use  : "66:36:dd:63:dc:48" 
>> mtu : 1500 
>> mtu_request : [] 
>> name    : br-int 
>> ofport  : 65534 
>> ofport_request  : [] 
>> options : {} 
>> other_config    : {} 
>> statistics  : {collisions=0, rx_bytes=0, rx_crc_err=0, rx_dropped=0, 
>> rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0, tx_bytes=0, 
>> tx_dropped=0, tx_errors=0, tx_packets=0} 
>> status  : {driver_name=openvswitch} 
>> type    : internal 
>>
>> _uuid   : 1e511b4d-f7c2-499f-bd8c-07236e7bb7af 
>> admin_state : up 
>> bfd : {} 
>> bfd_status  : {} 
>> cfm_fault   : [] 
>> cfm_fault_status    : [] 
>> cfm_flap_count  : [] 
>> cfm_health  : [] 
>> cfm_mpid    : [] 
>> cfm_remote_mpids    : [] 
>> cfm_remote_opstate  : [] 
>> duplex  : [] 
>> error   : [] 
>> external_ids    : {} 
>> ifindex : 35 
>> ingress_policing_burst: 0 
>> ingress_policing_rate: 0 
>> lacp_current    : [] 
>> link_resets : 0 
>> link_speed  : [] 
>> link_state  : up 
>> lldp    : {} 
>> mac : [] 
>> mac_in_use  : "1a:85:d1:d9:e2:a5" 
>> mtu : [] 
>> mtu_request : [] 
>> name    : "ovn-566849-0" 
>> ofport  : 5 
>> ofport_request  : [] 
>> options : {csum="true", key=flow, remote_ip="192.168.1.41"} 
>> other_config    : {} 
>> statistics  : {rx_bytes=0, rx_packets=0, tx_bytes=0, tx_packets=0} 
>> status  : {tunnel_egress_iface=ovirtmgmt, 
>> tunnel_egress_iface_carrier=up} 
>> type    : geneve
>>
>>
>> When I try to remove a port - it never ends (just hanging):
>>
>> [root@ovirt1 openvswitch]# ovs-vsctl --dry-run del-port br-int ovn-25cc77-0  
>>    
>> In journal  I see only this:
>> дек 12 04:13:57 ovirt1.localdomain ovs-vsctl[22030]: 
>> ovs|1|vsctl|INFO|Called as ovs-vsctl --dry-run del-port br-int 
>> 

[ovirt-users] Re: kernel parameter "spectre_v2=retpoline"

2019-12-12 Thread Michal Skrivanek
> On 12 Dec 2019, at 17:38, Matthias Leopold 
>  wrote:
>
> Hi,
>
> I'm planning to upgrade my installation from CentOS 7.6/oVirt 4.3.5 to CentOS 
> 7.7/oVirt 4.3.7.
> According to
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.7_release_notes/new_features#enhancement_kernel
> there's a new default setting for Spectre V2 mitigation in new RHEL 7 
> installs. Shall I switch my hypervisor hosts to this setting when upgrading? 
> Would newly installed CentOS 7 oVirt hosts have that kernel parameter?

Doesn’t the same paragraph describe new-install vs. upgrade behavior? ;)
I wouldn’t think CentOS is going to be any different.
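Concretely: on upgraded hosts the old behavior is kept, so opting in to the new default is a manual kernel-command-line change. A sketch of the stock CentOS 7 route (verify paths on your hosts before rebooting; EFI systems keep grub.cfg elsewhere):

```
# /etc/default/grub -- append the selector to the existing kernel command line
GRUB_CMDLINE_LINUX="... spectre_v2=retpoline"
```

followed by `grub2-mkconfig -o /boot/grub2/grub.cfg` (or `grubby --update-kernel=ALL --args="spectre_v2=retpoline"`) and a reboot; the active choice can be read back from `/sys/devices/system/cpu/vulnerabilities/spectre_v2`.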

>
> thx
> Matthias
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/45OHIHG3JGQQFF7HAC5OVUERDNEK6BVH/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RIKODIA7DZ6BV73HFX7PFMMASOXAL2Q2/


[ovirt-users] Hosted Engine Failover Timing

2019-12-12 Thread Robert Webb
So in doing some testing, I pulled the plug on my node where the hosted engine 
was running. Rough timing was about 3.5 minutes before the portal was available 
again.

I searched around first, but could not find whether there is any way to speed up
the detection time in order to reboot the hosted engine quicker.

Right now I am only testing this and will add in VM's later, which I understand 
should reboot a lot quicker.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DH6ZHSISOYRMVTS7BVUH3UPV5E272MOG/


[ovirt-users] Re: Cannot forward traffic through VXLAN

2019-12-12 Thread Dominik Holler
On Thu, Dec 12, 2019 at 4:27 PM  wrote:

> > On Thu, Dec 12, 2019 at 11:29 AM  >
> >
> > I see.
> > This will create an external OVN network.
> > As far as I know, OVN networks do not allow mac spoofing, even if port
> > security is disabled.
> >
> I have installed the vdsm hook for allow both promiscuous and mac-spoofing
> and have the same experience.
> So it is safe to assume that this cannot be supported in ovirt?
>


Non-external logical networks whose vNIC profiles have no network filter when
the VM is started (or the vNIC is hotplugged) allow any MAC address. This works
without any hook required.
The simplest flow for a lab would be to remove the network filter from
ovirtmgmt, attach ovirtmgmt to a VM and boot the VM.



> >
> > Are you able to use physical networks (oVirt logical network with VM
> > networking, optional VLAN tag, but not external)
> > to connect the oVirt VMs?
> >
> I can connect to VMs through the internet and IPSEC, but i wanted to
> extend them.
> Do you know of any other way where i can extend on VM network from ovirt
> to another hypervisor?
>

As I wrote above, layer 2 tunneling from one VM to another should work.
Are you forced to extend the network on layer 2? If not,
two VMs connected by a tunnel or a VPN might be more straightforward and would
even limit layer 2 broadcasts.


> Any idea will help.
>
> Appreciate the till now assistance.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PPOE54V2SXWZUNS5WFPH4E6RQHQHKUDP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5TPQLN2D7H3L5VGHXFQ3L624PWNU6SIE/


[ovirt-users] kernel parameter "spectre_v2=retpoline"

2019-12-12 Thread Matthias Leopold

Hi,

I'm planning to upgrade my installation from CentOS 7.6/oVirt 4.3.5 to 
CentOS 7.7/oVirt 4.3.7.

According to
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.7_release_notes/new_features#enhancement_kernel
there's a new default setting for Spectre V2 mitigation in new RHEL 7 
installs. Shall I switch my hypervisor hosts to this setting when 
upgrading? Would newly installed CentOS 7 oVirt hosts have that kernel 
parameter?


thx
Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/45OHIHG3JGQQFF7HAC5OVUERDNEK6BVH/


[ovirt-users] Re: Still having NFS issues. (Permissions)

2019-12-12 Thread Milan Zamazal
Strahil  writes:

> Why do you use  'all_squash' ?
>
> all_squashMap all uids and gids to the anonymous user. Useful for
> NFS-exported public FTP directories, news spool directories, etc. The
> opposite option is no_all_squash, which is the default setting.

AFAIK all_squash,anonuid=36,anongid=36 is the recommended NFS setting
for oVirt and the only one guaranteed to work.

Regards,
Milan

> Best Regards,
> Strahil Nikolov
>
> On Dec 10, 2019 07:46, Tony Brian Albers wrote:
>>
>> On Mon, 2019-12-09 at 18:43 +, Robert Webb wrote: 
>> > To add, the 757 permission does not need to be on the .lease or the 
>> > .meta files. 
>> > 
>> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/KZF6RCSRW2QV3PUEJCJW5DZ54DLAOGAA/
>> >  
>>
>> Good morning, 
>>
>> Check SELinux just in case. 
>>
>> Here's my config: 
>>
>> NFS server: 
>> /etc/exports: 
>> /data/ovirt 
>> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36) 
>>
>> Folder: 
>> [root@kst001 ~]# ls -ld /data/ovirt 
>> drwxr-xr-x 3 vdsm kvm 76 Jun  1  2017 /data/ovirt 
>>
>> Subfolders: 
>> [root@kst001 ~]# ls -l /data/ovirt/* 
>> -rwxr-xr-x 1 vdsm kvm  0 Dec 10 06:38 /data/ovirt/__DIRECT_IO_TEST__ 
>>
>> /data/ovirt/a597d0aa-bf22-47a3-a8a3-e5cecf3e20e0: 
>> total 4 
>> drwxr-xr-x  2 vdsm kvm  117 Jun  1  2017 dom_md 
>> drwxr-xr-x 56 vdsm kvm 4096 Dec  2 14:51 images 
>> drwxr-xr-x  4 vdsm kvm   42 Jun  1  2017 master 
>> [root@kst001 ~]# 
>>
>>
>> The user: 
>> [root@kst001 ~]# id vdsm 
>> uid=36(vdsm) gid=36(kvm) groups=36(kvm) 
>> [root@kst001 ~]# 
>>
>> And output from 'mount' on a host: 
>> kst001:/data/ovirt on /rhev/data-center/mnt/kst001:_data_ovirt type nfs 
>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock, 
>> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=<nfs-server-ip>, 
>> mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=<nfs-server-ip>) 
>>
>>
>> HTH 
>>
>> /tony 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/T6S32XNRB6S67PH5TOZZ6ZAD6KMVA3G6/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z5XPTK5B4KTITNDRFKR3C7TQYUXQTC4A/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSSPIUYPPGSAS5TUV3GUWMWNIGGIB2NF/


[ovirt-users] Re: Did a change in Ansible 2.9 in the ovirt_vm_facts module break the hosted-engine-setup?

2019-12-12 Thread thomas
What got me derailed was the "ERROR" tag and the fact that it was the last 
thing to happen on the outside ("waiting for engine to be up"), while the 
HostedEngineLocal on the inside was looking for Gluster members it couldn't 
find...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O4I6TCEFRI66NPE757WCHVG6B5KVO6SU/


[ovirt-users] Re: Did a change in Ansible 2.9 in the ovirt_vm_facts module break the hosted-engine-setup?

2019-12-12 Thread thomas
Thanks Martin, that actually helps a lot, because I was afraid of the 
implications.

So the error must really be elsewhere. I am having another look at the logs 
from the HostedEngineLocal and it seems to complain that no Gluster members are 
up, not even the initial one.

I also saw no entries in the Postgres gluster_servers table, so I killed the 
HostedEngineLocal VM and am doing another setup run to see if I can find out 
what's going wrong.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ESVH5R2ZTFDNLYTMPYQOKFJD7WBRHR2/


[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-12 Thread Strahil
Hi Adrian,

Have you checked the following link:
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/config-backup-recovery

Best Regards,
Strahil Nikolov

On Dec 12, 2019 12:35, adrianquint...@gmail.com wrote:
>
> Hi Sahina/Strahil, 
> We followed the recommended setup from the Gluster documentation; however, one 
> of my colleagues noticed a Python traceback in the logs. It turns out it was a 
> missing symlink to a library.
>
> We created the following symlink on all the master servers (Cluster 1, oVirt 1) 
> and slave servers (Cluster 2, oVirt 2), and geo-sync started working: 
> /lib64/libgfchangelog.so -> /lib64/libgfchangelog.so.0 
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------
> MASTER NODE            MASTER VOL    MASTER BRICK                             SLAVE USER    SLAVE                              SLAVE NODE              STATUS     CRAWL STATUS       LAST_SYNCED
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------
> host1.mydomain1.com    geo-master    /gluster_bricks/geo-master/geo-master    root          slave1.mydomain2.com::geo-slave    slave1.mydomain2.com    Active     Changelog Crawl    2019-12-12 05:22:56 
> host2.mydomain1.com    geo-master    /gluster_bricks/geo-master/geo-master    root          slave1.mydomain2.com::geo-slave    slave2.mydomain2.com    Passive    N/A                N/A 
> host3.mydomain1.com    geo-master    /gluster_bricks/geo-master/geo-master    root          slave1.mydomain2.com::geo-slave    slave3.mydomain2.com    Passive    N/A                N/A 
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------
> we still require  a bit more testing but at least it is syncing now. 
>
> I am trying to find good documentation on how to achieve geo-replication for 
> oVirt; is that something you can point me to? Basically I am looking for a way 
> to do geo-replication from site A to site B, but the Geo-Replication pop-up 
> window in oVirt does not seem to have the functionality to connect to a slave 
> server from another oVirt setup. 
>
>
>
> As a side note, from the oVirt WEB UI the "cancel button" for the "New 
> Geo-Replication" does not seem to work: storage > volumes > "select your 
> volume" > "click 'Geo-Replication' 
>
> Any good documentation you can point me to is welcome. 
>
> thank you for the swift assistance. 
>
> Regards, 
>
> Adrian
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EQHPJ3AUTWMSCMUKNCNZTAWKPZKF54M6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BG3KSJMUMZS4KAMO3L7KTFIB3GLL65UC/


[ovirt-users] Re: encrypted GENEVE traffic

2019-12-12 Thread Pavel Nakonechnyi
On Thursday, 12 December 2019 13:09:39 CET Dominik Holler wrote:
> On Thu, Dec 12, 2019 at 12:20 PM Pavel Nakonechnyi 
> 
> wrote:
> > On Thursday, 12 December 2019 10:23:28 CET Dominik Holler wrote:
> > > On Thu, Dec 12, 2019 at 10:06 AM Pavel Nakonechnyi 
> > 
> > What creates all these chassis, mappings, encapsulations?
> 
> This is all done by OVN.
> On every host runs the ovn-controller, which configures the tunnels
> according to
> the information he gets from ovn south db.
> Please find a good entry in
> http://www.openvswitch.org/support/dist-docs/ovn-architecture.7.html
> but all documentation about OVN should work.
> 

Thank you! I was incorrectly reading about Open vSwitch, not OVN. :)

> > however, I was not able to find any clues where in particular it is
> > implemented...
> > 
> > Once this is understood, it will be possible to consider altering the
> > corresponding code to include ipsec-related options.
> 
> This would be amazing!

I have added some notes on my attempts to the aforementioned bug (https://
bugzilla.redhat.com/show_bug.cgi?id=1782056). So, it appears to be almost 
working.


--
WBR, Pavel
 +32478910884


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I43PGKYNTYI4ZCNHMOTR63K4P5HLEVGV/


[ovirt-users] Re: HCL: 4.3.7: Hosted engine fails

2019-12-12 Thread thomas
You may just be able to use the username and password already created during 
the installation: vdsm@ovirt/shibboleth.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TD6EBEHS7JJW3UHSNGN3JPVYX25KF5H3/


[ovirt-users] Re: Cannot forward traffic through VXLAN

2019-12-12 Thread k . betsis
> On Thu, Dec 12, 2019 at 11:29 AM  
> 
> I see.
> This will create an external OVN network.
> As far as I know, OVN networks do not allow mac spoofing, even if port
> security is disabled.
> 
I have installed the vdsm hook to allow both promiscuous mode and MAC spoofing, 
and have the same experience.
So is it safe to assume that this cannot be supported in oVirt?
>
> Are you able to use physical networks (oVirt logical network with VM
> networking, optional VLAN tag, but not external)
> to connect the oVirt VMs?
>
I can connect to VMs through the internet and IPsec, but I wanted to extend 
them.
Do you know of any other way I can extend a VM network from oVirt to 
another hypervisor?
Any idea will help.

Appreciate the till now assistance. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PPOE54V2SXWZUNS5WFPH4E6RQHQHKUDP/


[ovirt-users] Re: HCL: 4.3.7: Hosted engine fails

2019-12-12 Thread Ralf Schenk
Hello,

hosted engine deployment simply fails on EPYC with 4.3.7.

See my earlier posts "HostedEngine Deployment fails on AMD EPYC 7402P
4.3.7".

I was able to get this up and running by modifying HostedEngine VM XML
while Installation tries to start the engine. Great fun !

virsh -r dumpxml HostedEngine shows:

  <cpu>
    <model>EPYC</model>
    <feature policy='require' name='virt-ssbd'/>
    ...
  </cpu>

which is not working. I removed the line with virt-ssbd via virsh edit
and then it was able to start Engine. To modify the defined HostedEngine
you need to create an account for libvirt via saslpasswd2 to be able to
"virsh edit HostedEngine"

saslpasswd2 -c -f /etc/libvirt/passwd.db admin@ovirt

so your Passdb shows:

sasldblistusers2 /etc/libvirt/passwd.db
vdsm@ovirt: userPassword
admin@ovirt: userPassword

If the HostedEngine runs afterwards, let it run so it updates the
persistent configuration on the hosted storage. I also modified these by
untarring and manipulating the corresponding XML strings.

Really ugly and dysfunctional.

Bye

Am 11.12.2019 um 14:33 schrieb Robert Webb:
> I could not find if that CPU supported SSBD or not.
>
> Log into one your nodes via console and run, "cat /proc/cpuinfo" and check 
> the "flags" section and see if SSBD is listed. If not, then look at your 
> cluster config under the "General" section and see what it has for "Cluster 
> CPU Type". Make sure it hasn't chosen a CPU type which it thinks has SSBD 
> available.
>
> I have a Xeon X5670 and it does support SSBD and there is a specific CPU type 
> selected named, "Intel Westmere IBRS SSBD Family".
>
> Hope this helps.
>
> 
> From: Christian Reiss 
> Sent: Wednesday, December 11, 2019 7:55 AM
> To: users@ovirt.org
> Subject: [ovirt-users] HCL: 4.3.7: Hosted engine fails
>
> Hey all,
>
> Using a homogeneous ovirt-node-ng-4.3.7-0.20191121.0 freshly created
> cluster using node installer I am unable to deploy the hosted engine.
> Everything else worked.
>
> In vdsm.log is a line, just after attempting to start the engine:
>
> libvirtError: the CPU is incompatible with host CPU: Host CPU does not
> provide required features: virt-ssbd
>
> I am using AMD EPYC 7282 16-Core Processors.
>
> I have attached
>
>   - vdsm.log   (during and failing the start)
>   - messages   (for bootup / libvirt messages)
>   - dmesg  (grub / boot config)
>   - deploy.log (browser output during deployment)
>   - virt-capabilites (virsh -r capabilities)
>
> I can't think -or don't know- off any other log files of interest here,
> but I am more than happy to oblige.
>
> notectl check tells me
>
> Status: OK
> Bootloader ... OK
>Layer boot entries ... OK
>Valid boot entries ... OK
> Mount points ... OK
>Separate /var ... OK
>Discard is used ... OK
> Basic storage ... OK
>Initialized VG ... OK
>Initialized Thin Pool ... OK
>Initialized LVs ... OK
> Thin storage ... OK
>Checking available space in thinpool ... OK
>Checking thinpool auto-extend ... OK
> vdsmd ... OK
>
> layers:
>ovirt-node-ng-4.3.7-0.20191121.0:
>  ovirt-node-ng-4.3.7-0.20191121.0+1
> bootloader:
>default: ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64)
>entries:
>  ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64):
>index: 0
>title: ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64)
>kernel:
> /boot/ovirt-node-ng-4.3.7-0.20191121.0+1/vmlinuz-3.10.0-1062.4.3.el7.x86_64
>args: "ro crashkernel=auto rd.lvm.lv=onn_node01/swap
> rd.lvm.lv=onn_node01/ovirt-node-ng-4.3.7-0.20191121.0+1 rhgb quiet
> LANG=en_GB.UTF-8 img.bootid=ovirt-node-ng-4.3.7-0.20191121.0+1"
>initrd:
> /boot/ovirt-node-ng-4.3.7-0.20191121.0+1/initramfs-3.10.0-1062.4.3.el7.x86_64.img
>root: /dev/onn_node01/ovirt-node-ng-4.3.7-0.20191121.0+1
> current_layer: ovirt-node-ng-4.3.7-0.20191121.0+1
>
>
> The odd thing is the hosted engine vm does get started during initial
> configuration and works. Just when the ansible stuff is done an its
> moved over to ha storage the CPU quirks start.
>
> So far I learned that ssbd is a mitigation protection but the flag is
> not in my cpu. Well, ssbd is virt-ssbd is not.
>
> I am *starting* with ovirt. I would really, really welcome it if
> recommendations would include clues on how to make it happen.
> I do rtfm, but I was unable to find anything (or any solution) anywhere.
> Not after 80 hours of working on this.
>
> Thank you all.
> -Chris.
>
> --
>   Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
> supp...@alpha-labs.net   \ /Campaign
>   X   against HTML
>   WEB alpha-labs.net / \   in eMails
>
>   GPG Retrieval https://gpg.christian-reiss.de
>   GPG ID ABCD43C5, 0x44E29126ABCD43C5
>   GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5
>
>   "It's better to reign in hell than to serve in heaven.",
>   

[ovirt-users] Re: Cannot forward traffic through VXLAN

2019-12-12 Thread Dominik Holler
On Thu, Dec 12, 2019 at 11:29 AM  wrote:

> > On Wed, Dec 11, 2019 at 5:31 PM  >
> > Is VyOS installed on the host, or in a VM?
> >
> VyOS is installed on the ovirt node
> >
> >
> > Does this mean that the VyOS VM on oVirt should forward layer 2 traffic
> to
> > the VyOS VM on proxmox?
> > Is there a way to share a VLAN? (This would avoid additional tunneling.)
> > Can you please share some details?
> >
> VLAN approach is not feasible unfortunatelly.
> VyOS VM on oVirt should forward Layer 2 traffic over ovirtmgmt network.
> So from oVirt's perspective there is no tunneling.
> >
> >
> > If VyOS is a VM on oVirt, network filtering should be disabled on the
> vNIC
> > profile which sends and
> > receives the unencapsulated traffic, before the oVirt VM is booted.
> >
> I have disabled all filters on the VM Network by selecting Network Port
> Security: Disabled
> >
> >
> > Don't understand.
> I have created a VM Network with no filters on ovirt named auth_net with
> the following parameters:
> 1. VM Network, check
> 2. MTU, custom 2000
> 3. Create on external provider, check
> 3a. External provider: ovirt-provider-ovn
>

I see.
This will create an external OVN network.
As far as I know, OVN networks do not allow mac spoofing, even if port
security is disabled.

Are you able to use physical networks (oVirt logical network with VM
networking, optional VLAN tag, but not external)
to connect the oVirt VMs?



> 3b. Network Port Security: Disabled
>
> This is done as to allow me to attach VMs to this network.
>
> I have attached 3 VMs on this VM Network.
> A firewall with IP e.g. 10.0.0.1
> The VyOS VM
> An LDAP VM with IP e.g. 10.0.0.5
>
> The VyOS VM is attached to the auth_net with no IP address and with L2TPv3
> via ovirtmgmt as to get the VM network Layer 2 traffic and forward it to
> the proxmox network through the VyOS routers.
> Even though i have not created any network filters traffic is dropped
> before reaching VyOS VM from the LDAP Auth server.
> TCPDUMP on the LDAP VM shows traffic leaving the LDAP VM.
> TCPDUMP on the VyOS VM does not show traffic reaching the vnic.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BOEK5LTE6CMYTUKUXDJ7ZM6HAI4YOCFR/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KETATDBPV352XNGTYV4BJ3GNNLKMVDY/


[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-12 Thread adrianquintero
Forgot to add the log entry that lead us to the solution for our particular 
case:

Log = 
/var/log/glusterfs/geo-replication/geo-master_slave1.mydomain2.com_geo-slave/gsyncd.log
-
[2019-12-11 20:37:27.831976] E [syncdutils(worker 
/gluster_bricks/geo-master/geo-master):339:log_raise_exception] : FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 330, in main
func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 82, in 
subcmd_worker
local.service_loop(remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1267, in 
service_loop
changelog_agent.init()
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 233, in 
__call__
return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 215, in 
__call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such file or 
directory
-
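The OSError on the last line simply means the dynamic linker cannot resolve libgfchangelog.so on that node. A quick standalone check (a sketch using Python's ctypes; the library name is taken straight from the traceback) is:

```python
import ctypes

def can_load(soname):
    """Return True if the dynamic linker can resolve the given shared object."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# On an affected node this prints False until the library is installed
# or symlinked into the linker's search path.
print(can_load("libgfchangelog.so"))
```

Running this on each master and slave node quickly shows which hosts are still missing the library.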
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MOD6XRKCCACLDASY6U53IHHLVQ2QPCO6/


[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-12 Thread Adrian Quintero
Hi Sunny,
Thanks for replying; the issue was solved and I added the comments to the
thread:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/ZAN3VFGL347RJZS2XEYR552XBJLYUQVS/#ZAN3VFGL347RJZS2XEYR552XBJLYUQVS

really appreciate you looking into it.

regards,

Adrian

On Thu, Dec 12, 2019 at 4:50 AM Sunny Kumar  wrote:

> Hi Adrian,
>
> If possible please share geo-rep logs, it will help to root cause.
>
> /sunny
>
> On Thu, Dec 12, 2019 at 5:43 AM Sahina Bose  wrote:
> >
> > +Sunny Kumar
> >
> > On Thu, Dec 12, 2019 at 6:33 AM Strahil  wrote:
> >>
> >> Hi Adrian,
> >>
> >> Have you checked the passwordless  rsync between master and slave
> volume nodes ?
> >>
> >> Best Regards,
> >> Strahil NikolovOn Dec 11, 2019 22:36, adrianquint...@gmail.com wrote:
> >> >
> >> > Hi,
> >> > I am trying to setup geo-replication between 2 sites, but I keep
> getting:
> >> > [root@host1 ~]#  gluster vol geo-rep geo-master 
> >> > slave1.mydomain2.com::geo-slave
> status
> >> >
> >> > MASTER NODE MASTER VOLMASTER BRICK
>  SLAVE USERSLAVE  SLAVE NODESTATUS
> CRAWL STATUSLAST_SYNCED
> >> >
> --
> >> > host1.mydomain1.comgeo-master
> /gluster_bricks/geo-master/geo-masterroot  
> slave1.mydomain2.com::geo-slave
>   N/A   FaultyN/A N/A
> >> > host2.mydomain2.comgeo-master
> /gluster_bricks/geo-master/geo-masterroot  
> slave1.mydomain2.com::geo-slave
>   N/A   FaultyN/A N/A
> >> > vmm11.virt.iad3pgeo-master
> /gluster_bricks/geo-master/geo-masterroot  
> slave1.mydomain2.com::geo-slave
>   N/A   FaultyN/A N/A
> >> >
> >> >
> >> > oVirt GUI has an icon in the volume that says "volume data is being
> geo-replicated" but we know that is not the case
> >> > From the logs i can see:
> >> > [2019-12-11 19:57:48.441557] I [fuse-bridge.c:6810:fini] 0-fuse:
> Unmounting '/tmp/gsyncd-aux-mount-5WaCmt'.
> >> > [2019-12-11 19:57:48.441578] I [fuse-bridge.c:6815:fini] 0-fuse:
> Closing fuse connection to '/tmp/gsyncd-aux-mount-5WaCmt'
> >> >
> >> > and
> >> > [2019-12-11 19:45:14.785758] I [monitor(monitor):278:monitor]
> Monitor: worker died in startup phase
> brick=/gluster_bricks/geo-master/geo-master
> >> >
> >> > thoughts?
> >> >
> >> > thanks,
> >> >
> >> > Adrian
> >> > ___
> >> > Users mailing list -- users@ovirt.org
> >> > To unsubscribe send an email to users-le...@ovirt.org
> >> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPTAODQ3Q4ZDKJ7W5BCKYC4NNM3TFQ2V/
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZAN3VFGL347RJZS2XEYR552XBJLYUQVS/
>
>

-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A35XLSWSFQ3QRG2PWFL3NXOYGXTKLORM/


[ovirt-users] Re: encrypted GENEVE traffic

2019-12-12 Thread Dominik Holler
On Thu, Dec 12, 2019 at 12:20 PM Pavel Nakonechnyi 
wrote:

> On Thursday, 12 December 2019 10:23:28 CET Dominik Holler wrote:
> > On Thu, Dec 12, 2019 at 10:06 AM Pavel Nakonechnyi 
> > > Could you direct me to the part of oVirt system which handles OVS
> tunnels
> > > creation?
> > >
> > > It seems that at some point oVirt issues a command similar to the
> > > following
> > > one:
> > >
> > > `ovs-vsctl add-port br-int ovn-xxx-0 -- set interface ovn-xxx-0 \
> > >
> > >  type=geneve options:csum=true key=flow options:remote_ip=1.1.1.1`
> > >
> > > I was not able to identify where the corresponding code is located. :(
> > >
> > > When I tried to do a bad thing, manual deletion of such tunnel
> interface:
> > >
> > > `ovs-vsctl del-port br-int ovn-xxx-0`
> > >
> > > it was immediately re-created or just was not deleted.. Still have to
> > > experiment with that..
> >
> > Yes, for VM OVS networking, oVirt does not use OVS directly, instead, OVN
> > is doing the work.
> >
> > During adding or reinstalling a host,
> >
> https://github.com/oVirt/ovirt-engine/tree/ovirt-engine-4.3/packaging/playbo
> > oks/roles/ovirt-provider-ovn-driver is triggered.
> > This triggers
> >
> https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/vdsm_tool/ovn
> > _config.py and
> >
> https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/setup
> > _ovn_controller.sh while the latter is really doing the work.
> >
> > I expect that this file has to be extended by the call from
> >
> http://docs.openvswitch.org/en/stable/tutorials/ovn-ipsec/#configuring-ovn-i
> > psec
> >
> > Maybe the
> >
> http://docs.openvswitch.org/en/stable/tutorials/ovn-ipsec/#enabling-ovn-ipse
> > c can be done in a first try manually.
> >
> > The weak point I expect is that the package  openvswitch-ipsec might be
> > missing in our repos, details in
> > http://docs.openvswitch.org/en/stable/tutorials/ipsec/#install-ovs-ipsec
> .
> >
> > In a first step, this package can be built manually.
> >
> > Any feedback on this would be very helpful, thanks for having a look!
> >
>
> Thank you, this was helpful. However, I am stuck with understanding how
> "tunnel" interfaces are created.
>
> For instance, if we execute `ovs-vsctl show` on some host, we get:
>
> ee590052-2dde-4635-94e0-a116511b9ba2
> Bridge br-int
> fail_mode: secure
> Port "ovn-6bb62c-0"
> Interface "ovn-6bb62c-0"
> type: geneve
> options: {csum="true", key=flow, remote_ip="1.1.1.2"}
> ...
>
> This relates to Southbound database stored on the engine, see the
> following
> excerpt:
>
> "Encap": {
> 
> "6d4b8487-7256-44f9-87b3-c885c7f4352f": {
> "ip": "172.18.53.202",
> "chassis_name": "6bb62cd8-6f97-47dd-8f0f-6bd1dbb73fd0",
> "options": ["map", [
> ["csum", "true"]
> ]],
> "type": "geneve"
> }
> ...
> "Chassis": {
> ...
> "8e3bac24-ef13-476c-a79b-e2c3f5e3b152": {
> "name": "6bb62cd8-6f97-47dd-8f0f-6bd1dbb73fd0",
> "hostname": "ovirt-h2.example.com",
> "encaps": ["uuid", "6d4b8487-7256-44f9-87b3-c885c7f4352f"],
> "external_ids": ["map", [
> ["datapath-type", ""],
> ["iface-types",
>
> "erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan"],
> ["ovn-bridge-mappings", ""]
> ]]
> }
> ...
>
> What creates all these chassis, mappings, encapsulations?


This is all done by OVN.
On every host, ovn-controller runs and configures the tunnels according to
the information it gets from the OVN southbound DB.
Please find a good entry in
http://www.openvswitch.org/support/dist-docs/ovn-architecture.7.html
but all documentation about OVN should work.
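To see this mechanism at work, the databases and the derived ports can be inspected directly (a sketch; the port name below is taken from the example output above and will differ on your hosts):

```
# On the engine (where the southbound DB lives): list chassis and their
# geneve encapsulation entries, i.e. what each ovn-controller consumes.
ovn-sbctl show

# On a host: show the tunnel interface that ovn-controller created from
# that information.
ovs-vsctl list interface ovn-6bb62c-0
```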


> I believe that it
> should be done by "ovirt-provider-ovn" (triggered by something from
> engine),
>

ovirt-provider-ovn is just a mapper from OpenStack Network API to OVN.
The talk in
https://archive.fosdem.org/2018/schedule/event/vai_leveraging_sdn_for_virtualization/
especially slide 26 explains more details.
You are welcome to ask more questions on this list.


> however, I was not able to find any clues where in particular it is
> implemented...
>
> Once this is understood, it will be possible to consider altering the
> corresponding code to include ipsec-related options.
>
>
This would be amazing!


>
>
> --
> WBR, Pavel
>
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YN2OH5P5JXCQL5XGIEYZHFODJVPNAQ6T/


[ovirt-users] Re: Did a change in Ansible 2.9 in the ovirt_vm_facts module break the hosted-engine-setup?

2019-12-12 Thread Martin Perina
On Thu, Dec 12, 2019 at 9:40 AM  wrote:

> This seems to be a much bigger generic issue with Ansible 2.9. Here is an
> excerpt from the release notes:
>
> "Renaming from _facts to _info
>
> Ansible 2.9 renamed a lot of modules from _facts to
> _info, because the modules do not return Ansible facts. Ansible
> facts relate to a specific host. For example, the configuration of a
> network interface, the operating system on a unix server, and the list of
> packages installed on a Windows box are all Ansible facts. The renamed
> modules return values that are not unique to the host. For example, account
> information or region data for a cloud provider. Renaming these modules
> should provide more clarity about the types of return values each set of
> modules offers."
>
> I guess that means all the oVirt playbooks need to be adapted for Ansible
> 2.9 and that evidently didn't happen or not completely.
>

We are going to adapt, but this is not a breaking change. Until Ansible 2.11
there is automatic aliasing between *_facts and *_info; only in 2.12 will
*_facts be removed. There is just a deprecation warning about this issue, but
no breakage.
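For playbook authors the migration is mechanical; a hedged sketch (module and parameter names follow the oVirt Ansible modules, the `pattern` value and `ovirt_auth` variable are examples from the usual login flow):

```
# Pre-2.9 style: still works through the automatic alias until Ansible 2.12,
# but logs a deprecation warning and relies on ansible_facts.
- ovirt_host_facts:
    auth: "{{ ovirt_auth }}"
    pattern: "name=myhost"

# 2.9+ style: the *_info module does not set ansible_facts, so register
# the result explicitly and read it from the registered variable.
- ovirt_host_info:
    auth: "{{ ovirt_auth }}"
    pattern: "name=myhost"
  register: host_info
```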

Also please be aware that we will require Ansible 2.9 as minimum version
for oVirt 4.4.


> It would also seem to suggest that there is no automated integration
> testing before an oVirt release... which contradicts the opening clause of
> the opening phrase of the ovirt.org download page: "oVirt 4.3.7 is
> intended for production use and is available for the following platforms..."
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ROWX54XPPIGHBDRYR6VRHVFXD4WZ4VBM/
>


-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CLOOMXRACNETEKCH6PA6PFH2G4RCOVNU/


[ovirt-users] Re: encrypted GENEVE traffic

2019-12-12 Thread Pavel Nakonechnyi
On Thursday, 12 December 2019 10:23:28 CET Dominik Holler wrote:
> On Thu, Dec 12, 2019 at 10:06 AM Pavel Nakonechnyi 
> > Could you direct me to the part of oVirt system which handles OVS tunnels
> > creation?
> > 
> > It seems that at some point oVirt issues a command similar to the
> > following
> > one:
> > 
> > `ovs-vsctl add-port br-int ovn-xxx-0 -- set interface ovn-xxx-0 \
> > 
> >  type=geneve options:csum=true key=flow options:remote_ip=1.1.1.1`
> > 
> > I was not able to identify where the corresponding code is located. :(
> > 
> > When I tried to do a bad thing, manual deletion of such tunnel interface:
> > 
> > `ovs-vsctl del-port br-int ovn-xxx-0`
> > 
> > it was immediately re-created or just was not deleted.. Still have to
> > experiment with that..
> 
> Yes, for VM OVS networking, oVirt does not use OVS directly, instead, OVN
> is doing the work.
> 
> During adding or reinstalling a host,
> https://github.com/oVirt/ovirt-engine/tree/ovirt-engine-4.3/packaging/playbo
> oks/roles/ovirt-provider-ovn-driver is triggered.
> This triggers
> https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/vdsm_tool/ovn
> _config.py and
> https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/setup
> _ovn_controller.sh while the latter is really doing the work.
> 
> I expect that this file has to be extended by the call from
> http://docs.openvswitch.org/en/stable/tutorials/ovn-ipsec/#configuring-ovn-i
> psec
> 
> Maybe the
> http://docs.openvswitch.org/en/stable/tutorials/ovn-ipsec/#enabling-ovn-ipse
> c can be done in a first try manually.
> 
> The weak point I expect is that the package  openvswitch-ipsec might be
> missing in our repos, details in
> http://docs.openvswitch.org/en/stable/tutorials/ipsec/#install-ovs-ipsec .
> 
> In a first step, this package can be built manually.
> 
> Any feedback on this would be very helpful, thanks for having a look!
> 

Thank you, this was helpful. However, I am stuck with understanding how 
"tunnel" interfaces are created.

For instance, if we execute `ovs-vsctl show` on some host, we get:

ee590052-2dde-4635-94e0-a116511b9ba2
Bridge br-int
fail_mode: secure
Port "ovn-6bb62c-0"
Interface "ovn-6bb62c-0"
type: geneve
options: {csum="true", key=flow, remote_ip="1.1.1.2"}
...

This relates to Southbound database stored on the engine, see the following 
excerpt:

"Encap": {

"6d4b8487-7256-44f9-87b3-c885c7f4352f": {
"ip": "172.18.53.202",
"chassis_name": "6bb62cd8-6f97-47dd-8f0f-6bd1dbb73fd0",
"options": ["map", [
["csum", "true"]
]],
"type": "geneve"
}
...
"Chassis": {
...
"8e3bac24-ef13-476c-a79b-e2c3f5e3b152": {
"name": "6bb62cd8-6f97-47dd-8f0f-6bd1dbb73fd0",
"hostname": "ovirt-h2.example.com",
"encaps": ["uuid", "6d4b8487-7256-44f9-87b3-c885c7f4352f"],
"external_ids": ["map", [
["datapath-type", ""],
["iface-types", 
"erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan"],
["ovn-bridge-mappings", ""]
]]
}
...

What creates all these chassis, mappings, encapsulations? I believe that it 
should be done by "ovirt-provider-ovn" (triggered by something from engine), 
however, I was not able to find any clues where in particular it is 
implemented...

Once this is understood, it will be possible to consider altering the 
corresponding code to include ipsec-related options.



--
WBR, Pavel


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QWRCURVOH35FYWAHHW5LR5YVYAUUJ4ZI/


[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-12 Thread adrianquintero
Hi Sahina/Strahil,
We followed the recommended setup from the Gluster documentation; however, one 
of my colleagues noticed a Python traceback in the logs, and it turned out to 
be a missing symlink to a library.

We created the following symlink to all  the master servers (cluster 1 oVirt 1) 
and slave servers (Cluster 2, oVirt2) and geo-sync started working:
/lib64/libgfchangelog.so -> /lib64/libgfchangelog.so.0
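For reference, that fix can be applied to every node in one pass (a sketch; the hostnames are the examples used in this thread, adjust to your clusters):

```
for h in host1.mydomain1.com host2.mydomain1.com host3.mydomain1.com \
         slave1.mydomain2.com slave2.mydomain2.com slave3.mydomain2.com; do
    ssh root@"$h" 'ln -sf /lib64/libgfchangelog.so.0 /lib64/libgfchangelog.so'
done
```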

MASTER NODE MASTER VOLMASTER BRICK 
SLAVE USERSLAVE  SLAVE NODE 
STATUSCRAWL STATUSLAST_SYNCED

host1.mydomain1.comgeo-master/gluster_bricks/geo-master/geo-master
root   slave1.mydomain2.com::geo-slave slave1.mydomain2.com
ActiveChangelog Crawl2019-12-12 05:22:56
host2.mydomain1.comgeo-master/gluster_bricks/geo-master/geo-master
root   slave1.mydomain2.com::geo-slave slave2.mydomain2.com
PassiveN/A N/A
host3.mydomain1.comgeo-master/gluster_bricks/geo-master/geo-master
root   slave1.mydomain2.com::geo-slave slave3.mydomain2.com
PassiveN/A N/A

we still require  a bit more testing but at least it is syncing now.

I am trying to find good documentation on how to achieve geo-replication for 
oVirt; is that something you can point me to? Basically, I am looking for a way 
to do geo-replication from site A to site B, but the Geo-Replication pop-up 
window in oVirt does not seem to offer a way to connect to a slave server from 
another oVirt setup.



As a side note, in the oVirt web UI the Cancel button of the "New 
Geo-Replication" dialog does not seem to work (Storage > Volumes > select your 
volume > Geo-Replication).

Any good documentation you can point me to is welcome.

thank you for the swift assistance.

Regards,

Adrian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EQHPJ3AUTWMSCMUKNCNZTAWKPZKF54M6/


[ovirt-users] Re: Cannot forward traffic through VXLAN

2019-12-12 Thread k . betsis
> On Wed, Dec 11, 2019 at 5:31 PM  
> Is VyOS installed on the host, or in a VM?
> 
VyOS is installed on the ovirt node
> 
> 
> Does this mean that the VyOS VM on oVirt should forward layer 2 traffic to
> the VyOS VM on proxmox?
> Is there a way to share a VLAN? (This would avoid additional tunneling.)
> Can you please share some details?
> 
The VLAN approach is unfortunately not feasible.
The VyOS VM on oVirt should forward Layer 2 traffic over the ovirtmgmt network,
so from oVirt's perspective there is no tunneling.
> 
> 
> If VyOS is a VM on oVirt, network filtering should be disabled on the vNIC
> profile which sends and
> receives the unencapsulated traffic, before the oVirt VM is booted.
> 
I have disabled all filters on the VM Network by selecting Network Port 
Security: Disabled
> 
> 
> Don't understand.
I have created a VM Network with no filters on ovirt named auth_net with the 
following parameters:
1. VM Network, check
2. MTU, custom 2000
3. Create on external provider, check
3a. External provider: ovirt-provider-ovn
3b. Network Port Security: Disabled

This is done so as to allow me to attach VMs to this network.

I have attached 3 VMs on this VM Network.
A firewall with IP e.g. 10.0.0.1
The VyOS VM
An LDAP VM with IP e.g. 10.0.0.5

The VyOS VM is attached to auth_net with no IP address, and uses L2TPv3 via 
ovirtmgmt so as to get the VM network's Layer 2 traffic and forward it to the 
proxmox network through the VyOS routers.
Even though I have not created any network filters, traffic from the LDAP auth 
server is dropped before reaching the VyOS VM.
TCPDUMP on the LDAP VM shows traffic leaving the LDAP VM.
TCPDUMP on the VyOS VM does not show traffic reaching the vnic.
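One way to rule out OVN-side filtering is to check what the northbound DB actually records for the ports on this network (a sketch; run it where ovn-nbctl can reach the engine's northbound DB, and substitute your real switch and port names):

```
# List logical switches and their ports, including auth_net.
ovn-nbctl show

# An empty port security set means no MAC/IP enforcement on that port.
ovn-nbctl lsp-get-port-security <port-name>
```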
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BOEK5LTE6CMYTUKUXDJ7ZM6HAI4YOCFR/


[ovirt-users] Re: Issue deploying self hosted engine on new install

2019-12-12 Thread Yedidyah Bar David
On Wed, Dec 11, 2019 at 8:19 PM  wrote:
>
> Yes, I have had the same and posted about it here somewhere: I believe it's 
> an incompatible Ansible change.
>
> Here is the critical part of the message below:
> "The 'ovirt_host_facts' module has been renamed to 'ovirt_host_info', and the 
> renamed one no longer returns ansible_facts"
>
> and that change was made in the transition of Ansible 2.8* to 2.9, from what 
> I gathered.
>
> I guess I should just make it a bug report if you find the same message in 
> your logs.
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": 
> [{"address": "xdrd1022s.priv.atos.fr", "affinity_labels": [], 
> "auto_numa_status": "unknown", "certificate": {"organization": 
> "priv.atos.fr", "subject": "O=priv.atos.fr,CN=xdrd1022s.priv.atos.fr"}, 
> "cluster": {"href": 
> "/ovirt-engine/api/clusters/c407e776-1c3c-11ea-aeed-00163e56112a", "id": 
> "c407e776-1c3c-11ea-aeed-00163e56112a"}, "comment": "", "cpu": {"speed": 0.0, 
> "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], 
> "external_network_provider_configurations": [], "external_status": "ok", 
> "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": 
> "/ovirt-engine/api/hosts/a5bb73a1-f923-4568-8dda-434e07f7e243", "id": 
> "a5bb73a1-f923-4568-8dda-434e07f7e243", "katello_errata": [], "kdump_status": 
> "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 
> 0, "name": "xdrd1022s.priv.atos.fr", "network_attachments": [], "nics": [], 
> "numa_nodes": [], "numa_supporte
>  d": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 
> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, 
> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": 
> {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": 
> "SHA256:wqpBWq9Kb9+Nb3Jwtw61QJzo+R4gGOP2dLubssU5EPs", "port": 22}, 
> "statistics": [], "status": "install_failed", 
> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], 
> "transparent_huge_pages": {"enabled": false}, "type": "rhel", 
> "unmanaged_networks": [], "update_available": false, "vgpu_placement": 
> "consolidated"}]}, "attempts": 120, "changed": false, "deprecations": 
> [{"msg": "The 'ovirt_host_facts' module has been renamed to 
> 'ovirt_host_info', and the renamed one no longer returns ansible_facts", 
> "version": "2.13"}]}

This error does make sense, and indeed in
ovirt-ansible-hosted-engine-setup we use ovirt_host_facts and
ansible_facts:

https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/tasks/bootstrap_local_vm/05_add_host.yml#L136

But:

I failed to reproduce it myself. I have CentOS 8, and, from master
nightly snapshot:

ovirt-ansible-hosted-engine-setup-1.0.35-0.1.master.20191129201201.el8.noarch
ansible-2.9.2-1.el8.noarch

And in 
ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20191303-otxt1f.log
I did get:

2019-12-11 13:15:45,061+0200 DEBUG var changed: host "localhost" var
"host_result_up_check" type "" value: "{
"ansible_facts": {
"ovirt_hosts": [
...

I wonder how it worked for me. Would you like to open a bug anyway,
and attach versions of relevant packages and relevant logs? Thanks!

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6HXD4IHGE4SCZ2SWGYD4XUV374DFYUJH/


[ovirt-users] Re: Mixing compute and storage without (initial) HCI

2019-12-12 Thread Gianluca Cecchi
On Wed, Dec 11, 2019 at 7:46 PM  wrote:

> Some documentation, especially on older RHEV versions, seems to indicate
> that Gluster storage roles and compute server roles in an oVirt cluster are
> actually exclusive.
>
> Yet HCI is all about doing both, which is slightly confusing when you try
> to overcome HCI issues simply by running the management engine in a "side
> car", in my case simply a KVM VM running on a host that's not part of the
> HCI (attempted) cluster.
>
> So the HCI wizard failed to deploy the management VM, but left me with a
> working Gluster.
>
>
Indeed, reading latest Red Hat docs, it seems that you have two options:

a) if  you go with RHHI it is "mandatory" to use hosted engine VM
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/architecture
"
Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for
Virtualization) combines compute, storage, networking, and management
capabilities in one deployment.
"
and

https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements

"
3.4. Hosted Engine virtual machine
The Hosted Engine virtual machine requires at least the following:

1 dual core CPU (1 quad core or multiple dual core CPUs recommended)
4GB RAM that is not shared with other processes (16GB recommended)
25GB of local, writable disk space (50GB recommended)
1 NIC with at least 1Gbps bandwidth
"

b) If you go with "standard" Virtualization with the option of having
external engine and you want to adopt Gluster Storage, your storage hosts
cannot be at the same time also virtualization hosts:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/chap-enabling_red_hat_storage_in_red_hat_enterprise_virtualization_manager
"
6. Install Red Hat Gluster Storage

Install the latest version of Red Hat Gluster Storage on new servers, not
the virtualization hosts.
"

It would be nice to have a new option that collapses the virtualization and
storage node functionality onto the same hosts, but with an external engine.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FWXMUYOUTFCBKVE6FXS7TZYIXIOSCEGH/


[ovirt-users] Re: Gluster mount still fails on Engine deployment - any suggestions...

2019-12-12 Thread Rob
I have failed on the hosted engine VM IP.

I set it up with DHCP, with /etc/hosts entries set as well as DNS.

Can I recover from this, or do I need to start again?

[ INFO  ] TASK [ovirt.hosted_engine_setup : Get target engine VM IP address 
from VDSM stats]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fail if Engine IP is different from 
engine's he_fqdn resolved IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM 
IP address is  while the engine's he_fqdn controller.kvm.private resolves to 
192.168.100.179. If you are using DHCP, check your DHCP reservation 
configuration"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
[ INFO  ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of 
steps]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Set destination directory path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Create destination directory]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Give the vm time to flush dirty 
buffers]
[ INFO  ] ok: [localhost -> localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Copy engine logs]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in 
/etc/hosts for the local VM]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20191212090248.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, 
fix accordingly or re-deploy from scratch.
  Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20191212073933-kim3n9.log
You have new mail in /var/mail/root
[root@ovirt3 ~]# 
> On 5 Dec 2019, at 11:16, rob.dow...@orbitalsystems.co.uk wrote:
> 
> Hi Engine deployment fails here...
> 
> [ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is 
> "[Unexpected exception]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[Unexpected exception]\". 
> HTTP response code is 400."}
> 
> However Gluster looks good...
> 
> I have reinstalled all nodes from scratch.
> 
> root@ovirt3 ~]# gluster volume status 
> Status of volume: data
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick gfs3.gluster.private:/gluster_bricks/
> data/data   49152 0  Y   3756 
> Brick gfs2.gluster.private:/gluster_bricks/
> data/data   49153 0  Y   3181 
> Brick gfs1.gluster.private:/gluster_bricks/
> data/data   49152 0  Y   15548
> Self-heal Daemon on localhost   N/A   N/AY   17602
> Self-heal Daemon on gfs1.gluster.privateN/A   N/AY   15706
> Self-heal Daemon on gfs2.gluster.privateN/A   N/AY   3348 
> 
> Task Status of Volume data
> --
> There are no active volume tasks
> 
> Status of volume: engine
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick gfs3.gluster.private:/gluster_bricks/
> engine/engine   49153 0  Y   3769 
> Brick gfs2.gluster.private:/gluster_bricks/
> engine/engine   49154 0  Y   3194 
> Brick gfs1.gluster.private:/gluster_bricks/
> engine/engine   49153 0  Y   15559
> Self-heal Daemon on localhost   N/A   N/AY   17602
> Self-heal Daemon on gfs1.gluster.privateN/A   N/AY   15706
> Self-heal Daemon on gfs2.gluster.privateN/A   N/AY   3348 
> 
> Task Status of Volume engine
> 

[ovirt-users] Re: encrypted GENEVE traffic

2019-12-12 Thread Dominik Holler
On Thu, Dec 12, 2019 at 10:06 AM Pavel Nakonechnyi 
wrote:

> On Wednesday, 11 December 2019 16:37:50 CET Dominik Holler wrote:
> > On Wed, Dec 11, 2019 at 1:21 PM Pavel Nakonechnyi 
> >
>
> > > Are there plans to introduce such support? (or explicitly not to..)
> >
> > The feature is tracked in
> > https://bugzilla.redhat.com/1782056
> >
> > If you would comment on the bug about your use case and why the feature
> > would be helpful in your scenario, this might help to push the feature.
> >
>
> Great, thanks, added a comment.
>
>
Thanks for helping to adjust oVirt!


>
> > > Is it possible to somehow manually configure such tunneling for
> existing
> > > virtual networks? (even in a limited way)
> >
> > I would be interested to know, how far we are away from the flow
> described
> > in
> > http://docs.openvswitch.org/en/stable/tutorials/ovn-ipsec/ .
> > I expect that the openvswitch-ipsec package is missing. Any input on this
> > is welcome.
> >
>
> Could you direct me to the part of oVirt system which handles OVS tunnels
> creation?
>
> It seems that at some point oVirt issues a command similar to the
> following
> one:
>
> `ovs-vsctl add-port br-int ovn-xxx-0 -- set interface ovn-xxx-0 \
>  type=geneve options:csum=true key=flow options:remote_ip=1.1.1.1`
>
> I was not able to identify where the corresponding code is located. :(
>
> When I tried to do a bad thing, manual deletion of such tunnel interface:
>
> `ovs-vsctl del-port br-int ovn-xxx-0`
>
> it was immediately re-created, or perhaps was never deleted. I still have to
> experiment with that.
>
>

Yes, for VM OVS networking, oVirt does not use OVS directly; instead, OVN
does the work.

When a host is added or reinstalled,
https://github.com/oVirt/ovirt-engine/tree/ovirt-engine-4.3/packaging/playbooks/roles/ovirt-provider-ovn-driver
is triggered.
This triggers
https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/vdsm_tool/ovn_config.py
and
https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/setup_ovn_controller.sh
where the latter does the actual work.

I expect that this script has to be extended with the calls from
http://docs.openvswitch.org/en/stable/tutorials/ovn-ipsec/#configuring-ovn-ipsec

Maybe the steps in
http://docs.openvswitch.org/en/stable/tutorials/ovn-ipsec/#enabling-ovn-ipsec
can be tried manually first.

The weak point I expect is that the openvswitch-ipsec package might be
missing from our repos; details in
http://docs.openvswitch.org/en/stable/tutorials/ipsec/#install-ovs-ipsec .

As a first step, this package can be built manually.
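For anyone who wants to experiment: per the linked OVS tutorial, the enabling
step reduces to a single northbound-database flag. The commands below are an
untested sketch that assumes openvswitch-ipsec and the certificate setup from
the tutorial are already in place on every host:

```shell
# Untested sketch, following the OVS ovn-ipsec tutorial; assumes certificates
# and the openvswitch-ipsec service are already configured on all chassis.

# On the OVN central node, turn on IPsec for all tunnels:
ovn-nbctl set nb_global . ipsec=true

# On each chassis, verify that the IPsec monitor picked up the tunnels:
ovs-appctl -t ovs-monitor-ipsec tunnels/show
```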

Any feedback on this would be very helpful, thanks for having a look!


>
> --
> WBR, Pavel
>  +32478910884
>
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSPAI2YDBXBEYB43P4EIAZMQPRDBTZY2/


[ovirt-users] Re: encrypted GENEVE traffic

2019-12-12 Thread Pavel Nakonechnyi
On Wednesday, 11 December 2019 16:37:50 CET Dominik Holler wrote:
> On Wed, Dec 11, 2019 at 1:21 PM Pavel Nakonechnyi 
> 

> > Are there plans to introduce such support? (or explicitly not to..)
> 
> The feature is tracked in
> https://bugzilla.redhat.com/1782056
> 
> If you would comment on the bug about your use case and why the feature
> would be helpful in your scenario, this might help to push the feature.
> 

Great, thanks, added a comment.


> > Is it possible to somehow manually configure such tunneling for existing
> > virtual networks? (even in a limited way)
> 
> I would be interested to know how far we are from the flow described in
> http://docs.openvswitch.org/en/stable/tutorials/ovn-ipsec/ .
> I expect that the openvswitch-ipsec package is missing. Any input on this
> is welcome.
> 

Could you direct me to the part of oVirt system which handles OVS tunnels 
creation?

It seems that at some point oVirt issues a command similar to the following 
one:

`ovs-vsctl add-port br-int ovn-xxx-0 -- set interface ovn-xxx-0 \
 type=geneve options:csum=true key=flow options:remote_ip=1.1.1.1`

I was not able to identify where the corresponding code is located. :(

When I tried a destructive change, manually deleting such a tunnel interface:

`ovs-vsctl del-port br-int ovn-xxx-0`

it was immediately re-created, or perhaps was never deleted. I still have to
experiment with that.


--
WBR, Pavel
 +32478910884


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FNMNKX2IPO5TLVDXKUP6XTWNJII6UZYJ/


[ovirt-users] Re: Did a change in Ansible 2.9 in the ovirt_vm_facts module break the hosted-engine-setup?

2019-12-12 Thread thomas
This seems to be a much bigger generic issue with Ansible 2.9. Here is an 
excerpt from the release notes:

"Renaming from _facts to _info

Ansible 2.9 renamed a lot of modules from _facts to 
_info, because the modules do not return Ansible facts. Ansible 
facts relate to a specific host. For example, the configuration of a network 
interface, the operating system on a unix server, and the list of packages 
installed on a Windows box are all Ansible facts. The renamed modules return 
values that are not unique to the host. For example, account information or 
region data for a cloud provider. Renaming these modules should provide more 
clarity about the types of return values each set of modules offers."

I guess that means all the oVirt playbooks need to be adapted for Ansible 2.9,
and that evidently didn't happen, or at least not completely.
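The mechanical part of that adaptation is a rename. The sketch below is
illustrative only (the file path and playbook content are made up for the
demo): it rewrites deprecated `ovirt_*_facts` module usages to the
`ovirt_*_info` names that Ansible 2.9 expects.

```shell
# Illustrative demo: paths and playbook content are fabricated.
mkdir -p /tmp/ovirt-rename-demo
cat > /tmp/ovirt-rename-demo/play.yml <<'EOF'
- ovirt_vm_facts:
    pattern: name=myvm
EOF
# Rewrite every ovirt_<x>_facts occurrence to ovirt_<x>_info (GNU sed).
sed -i 's/\(ovirt_[a-z_]*\)_facts/\1_info/g' /tmp/ovirt-rename-demo/play.yml
grep 'ovirt_vm_info' /tmp/ovirt-rename-demo/play.yml
```

On a real checkout you would run the same `sed` over the roles and playbooks
directories, and review the diff before committing.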

It would also seem to suggest that there is no automated integration testing
before an oVirt release... which contradicts the opening phrase of the
ovirt.org download page: "oVirt 4.3.7 is intended for production use and is
available for the following platforms..."
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ROWX54XPPIGHBDRYR6VRHVFXD4WZ4VBM/


[ovirt-users] Re: OVN communications between hosts

2019-12-12 Thread Pavel Nakonechnyi
Hi Strahil,

On Wednesday, 11 December 2019 17:47:18 CET Strahil wrote:
> 
> Would you mind to share the list of ovn devices you have.
> Currently in UI, I don't have any network (except ovirtmgmt) and I see 
> multiplee devices.
> 
> My guess is that I should remove  all but the br-int , but I would like not 
> to kill my cluster communication :)
> 

Well, it is a bit different... In the UI you see the networks which are explicitly
managed by oVirt and the physical devices the hosts have. Thus, you see ovirtmgmt
and potentially other physical interfaces, as well as vNICs you created yourself.

The OVN feature is provided by the external provider "ovirt-provider-ovn". In case
you created a network using this external provider, it will appear in the "external
logical networks" part of the "setup host networks" interface.

From the host's Linux command line, you can see the corresponding virtual Open
vSwitch interfaces using the `ovs-vsctl show` command. It will show the "br-int"
bridge as well as several ports associated with virtual machines and Geneve tunnels
towards other hosts.
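As a concrete illustration, the Geneve peers can be pulled out of that output
with a little grep. The sample `ovs-vsctl show` fragment below is fabricated
(port names and IPs are placeholders); on a real host you would capture the
live command output instead:

```shell
# Fabricated sample of the relevant part of `ovs-vsctl show` output;
# on a real host use: ovs_show=$(ovs-vsctl show)
ovs_show='    Port "ovn-abc-0"
        Interface "ovn-abc-0"
            type: geneve
            options: {csum="true", key=flow, remote_ip="192.168.1.11"}
    Port "ovn-def-0"
        Interface "ovn-def-0"
            type: geneve
            options: {csum="true", key=flow, remote_ip="192.168.1.12"}'

# List the unique remote endpoints of the Geneve tunnels, one per line.
geneve_peers=$(printf '%s\n' "$ovs_show" | grep -oE 'remote_ip="[0-9.]+"' | sort -u)
printf '%s\n' "$geneve_peers"
```

Each line of the result names one peer host this node tunnels to, which is a
quick way to confirm the OVN mesh matches your cluster membership.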


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NBWWKSOU4A5DHBDG7KTZMVYZUEBPRK35/