[ovirt-users] Re: ovirt 4.3.7 geo-replication not working

2019-12-16 Thread Sunny Kumar
Hi Adrian,

If possible, please share the geo-rep logs; they will help with root-cause analysis.

/sunny

On Thu, Dec 12, 2019 at 5:43 AM Sahina Bose  wrote:
>
> +Sunny Kumar
>
> On Thu, Dec 12, 2019 at 6:33 AM Strahil  wrote:
>>
>> Hi Adrian,
>>
>> Have you checked the passwordless rsync between the master and slave volume
>> nodes?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Dec 11, 2019 22:36, adrianquint...@gmail.com wrote:
>> >
>> > Hi,
>> > I am trying to setup geo-replication between 2 sites, but I keep getting:
>> > [root@host1 ~]#  gluster vol geo-rep geo-master 
>> > slave1.mydomain2.com::geo-slave status
>> >
>> > MASTER NODE            MASTER VOL    MASTER BRICK                             SLAVE USER    SLAVE                              SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
>> > --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>> > host1.mydomain1.com    geo-master    /gluster_bricks/geo-master/geo-master    root          slave1.mydomain2.com::geo-slave    N/A           Faulty    N/A             N/A
>> > host2.mydomain2.com    geo-master    /gluster_bricks/geo-master/geo-master    root          slave1.mydomain2.com::geo-slave    N/A           Faulty    N/A             N/A
>> > vmm11.virt.iad3p       geo-master    /gluster_bricks/geo-master/geo-master    root          slave1.mydomain2.com::geo-slave    N/A           Faulty    N/A             N/A
>> >
>> >
>> > The oVirt GUI shows an icon on the volume that says "volume data is being
>> > geo-replicated", but we know that is not the case.
>> > From the logs I can see:
>> > [2019-12-11 19:57:48.441557] I [fuse-bridge.c:6810:fini] 0-fuse: 
>> > Unmounting '/tmp/gsyncd-aux-mount-5WaCmt'.
>> > [2019-12-11 19:57:48.441578] I [fuse-bridge.c:6815:fini] 0-fuse: Closing 
>> > fuse connection to '/tmp/gsyncd-aux-mount-5WaCmt'
>> >
>> > and
>> > [2019-12-11 19:45:14.785758] I [monitor(monitor):278:monitor] Monitor: 
>> > worker died in startup phase brick=/gluster_bricks/geo-master/geo-master
>> >
>> > thoughts?
>> >
>> > thanks,
>> >
>> > Adrian
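A hedged first checklist for sessions stuck in Faulty (the hostnames and volume names below are taken from the status output above; `gsec_create` and `create push-pem force` are the standard gluster commands for rebuilding a session, but verify against your gluster version):

```shell
# 1. Passwordless SSH from every master node to the slave must work:
ssh -o BatchMode=yes root@slave1.mydomain2.com true && echo "ssh ok"

# 2. Look for the real error in the geo-replication logs on the master:
tail -n 100 /var/log/glusterfs/geo-replication/geo-master*/*.log

# 3. If the pem keys are broken, recreate them and the session, then restart:
gluster system:: execute gsec_create
gluster volume geo-replication geo-master \
    slave1.mydomain2.com::geo-slave create push-pem force
gluster volume geo-replication geo-master \
    slave1.mydomain2.com::geo-slave start
```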
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct: 
>> > https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives: 
>> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPTAODQ3Q4ZDKJ7W5BCKYC4NNM3TFQ2V/
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZAN3VFGL347RJZS2XEYR552XBJLYUQVS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPJ4KA5V2GVWAZ2ZRVKAEGPLP3D6EEXY/


[ovirt-users] Re: Ovirt OVN help needed

2019-12-16 Thread Strahil
Hi Dominik,

Thanks for your reply.

On ovirt1 I got the following:
[root@ovirt1 openvswitch]# less  ovn-controller.log-20191216.gz
2019-12-15T01:49:02.988Z|00032|vlog|INFO|opened log file 
/var/log/openvswitch/ovn-controller.log
2019-12-16T01:18:02.114Z|00033|vlog|INFO|closing log file
ovn-controller.log-20191216.gz (END)

Same is on the other node:

[root@ovirt2 openvswitch]# less ovn-controller.log-20191216.gz
2019-12-15T01:26:03.477Z|00028|vlog|INFO|opened log file 
/var/log/openvswitch/ovn-controller.log
2019-12-16T01:30:01.718Z|00029|vlog|INFO|closing log file
ovn-controller.log-20191216.gz (END)

The strange thing is that the geneve tunnels are there:


[root@ovirt1 ~]# ovs-vsctl show
c0e938f1-b5b5-4d5a-9cda-29dae2986f29
Bridge br-int
fail_mode: secure
Port "ovn-25cc77-0"
Interface "ovn-25cc77-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.64"}  
Port "ovn-566849-0"
Interface "ovn-566849-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.41"}  
Port br-int
Interface br-int
type: internal
Port "vnet2"
Interface "vnet2"
ovs_version: "2.11.0"
[root@ovirt1 ~]# ovs-vsctl list ports
ovs-vsctl: unknown table "ports"
[root@ovirt1 ~]# ovs-vsctl list port
_uuid   : fbf40569-925e-4430-a7c5-c78d58979bbc
bond_active_slave   : []
bond_downdelay  : 0
bond_fake_iface : false
bond_mode   : []
bond_updelay: 0
cvlans  : []
external_ids: {}
fake_bridge : false
interfaces  : [3207c0cb-3000-40f2-a850-83548f76f090]
lacp: []
mac : []
name: "vnet2"
other_config: {}
protected   : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics  : {}
status  : {}
tag : []
trunks  : []
vlan_mode   : []

_uuid   : 8947f82d-a089-429b-8843-71371314cb52
bond_active_slave   : []
bond_downdelay  : 0
bond_fake_iface : false
bond_mode   : []
bond_updelay: 0
cvlans  : []
external_ids: {}
fake_bridge : false
interfaces  : [ec6a6688-e5d6-4346-ac47-ece1b8379440]
lacp: []
mac : []
name: br-int
other_config: {}
protected   : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics  : {}
status  : {}
tag : []
trunks  : []
vlan_mode   : []

_uuid   : 72d612be-853e-43e9-8f5c-ce66cef0bebe
bond_active_slave   : []
bond_downdelay  : 0
bond_fake_iface : false
bond_mode   : []
bond_updelay: 0
cvlans  : []
external_ids: 
{ovn-chassis-id="5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3@192.168.1.41"}
fake_bridge : false
interfaces  : [a31574fe-515b-420b-859d-7f2ac729638f]
lacp: []
mac : []
name: "ovn-566849-0"
other_config: {}
protected   : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics  : {}
status  : {}
tag : []
trunks  : []
vlan_mode   : []

_uuid   : 2043a15f-ec39-4cc3-b875-7be00423dd7a
bond_active_slave   : []
bond_downdelay  : 0
bond_fake_iface : false
bond_mode   : []
bond_updelay: 0
cvlans  : []
external_ids: 
{ovn-chassis-id="25cc77b3-046f-45c5-af0c-ffb2f77d73f1@192.168.1.64"}
fake_bridge : false
interfaces  : [f9a9e3ff-070e-4044-b601-7f7394dc295f]
lacp: []
mac : []
name: "ovn-25cc77-0"
other_config: {}
protected   : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics  : {}
status  : {}
tag : []
trunks  : []
vlan_mode   : []
[root@ovirt1 ~]#

[root@ovirt2 ~]# ovs-vsctl show
3dbab138-6b90-44c5-af05-b8a944c9bf20
Bridge br-int
fail_mode: secure
Port "ovn-baa019-0"
Interface "ovn-baa019-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.90"}  
Port br-int
Interface br-int
type: internal
Port "vnet5"
Interface "vnet5"
Port "ovn-566849-0&

[ovirt-users] Re: Libgfapi considerations

2019-12-16 Thread Jayme
I believe the snapshot issue is only present with gluster replica 3
volumes.  I can confirm it on my replica 3 cluster

On Mon, Dec 16, 2019 at 4:18 PM Alex McWhirter  wrote:

> I also use libgfapi in prod.
>
>
> 1. This is a pretty annoying issue; I wish engine-config would look to see
> if it is already enabled and just keep it that way.
>
> 2. Edit /etc/libvirt/qemu.conf and set dynamic ownership to 0, will stop
> the permission changes.
>
> 3. I don't see this error on any of my clusters, all using libgfapi.
>
>
> I also have no issues using snapshots with libgfapi, but live migration
> between storage domains indeed does not work.
>
>
> On 2019-12-16 12:46, Darrell Budic wrote:
>
> I use libgfapi in production; the performance is worth a couple of quirks
> for me.
>
> - watch major version updates, they'll silently turn it off because the
> engine starts using a new version variable
> - VM/qemu security quirk that resets ownership when the VM quits, was
> supposedly fixed in 4.3.6 but I still have it happen to me, a cron'd chown
> keeps it under control for me
> - some VMs cause a libvirt/vdsmd interaction that results in failed stats
> query, and the engine thinks my VMs are offline because the stats gathering
> is stuck. hoped a bug fix in 4.3.6 would take care of this too, but didn't.
> may be my VMs though, still analyzing for specific file issues
>
> I need to spend some time doing a little more research and filing/updating
> some bug reports, but it's been a busy end of year so far...
>
>   -Darrell
>
> On Dec 14, 2019, at 5:47 PM, Strahil Nikolov 
> wrote:
>
>
> According to GlusterFS Storage Domain
> 
>
> the feature is not the default as it is incompatible with Live Storage
> Migration.
>
> Best Regards,
> Strahil Nikolov
>
>
> On Saturday, December 14, 2019 at 17:06:32 GMT+2, Jayme <
> jay...@gmail.com> wrote:
>
>
> Are there currently any known issues with using libgfapi in the latest
> stable version of ovirt in hci deployments?  I have recently enabled it and
> have noticed a significant (over 4x) increase in io performance on my vms.
> I'm concerned however since it does not seem to be an ovirt default
> setting.  Is libgfapi considered safe and stable to use in ovirt 4.3 hci?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FYVTG3NUIXE5LJBBVEGGKHQFOGKJ5CU2/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDZD4W5UYYR6MROFS2OS5HLZCUIJUVIJ/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6KAWATJCAONOXE2HSLPXKC4YB23JE3KA/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LLLC3CMLWGSWFMWTYRBTRBOD764N2EEF/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GNMWDCNGJYNRMK54QCBKPVLDVIXWFT5B/


[ovirt-users] Re: NMA nodes

2019-12-16 Thread suporte
I see, Thanks 

For the huge pages, do I need to configure the kernel or just use the value of 
1048576, which is the recommended one? 

Thanks 

José 
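For context on that value: the hugepage size is given in KiB, so 1048576 corresponds to 1 GiB pages, and the kernel must have pages of that size reserved before a VM can use them. A minimal sketch (the exact custom-property semantics are an assumption to verify against your oVirt version):

```shell
# 1048576 is a page size in KiB; convert to GiB to see what it implies.
kib=1048576
echo "$((kib / 1024 / 1024)) GiB hugepage size"   # prints: 1 GiB hugepage size

# The kernel must reserve pages of that size, e.g. via boot parameters
# (illustrative values, not a recommendation):
#   default_hugepagesz=1G hugepagesz=1G hugepages=16
# What the running kernel currently provides:
grep -i '^Huge' /proc/meminfo
```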


From: "Sharon Gratch" 
To: supo...@logicworks.pt 
Cc: "users" 
Sent: Sunday, December 15, 2019 13:43:23 
Subject: Re: [ovirt-users] NMA nodes 

Hi, 

Enabling virtual NUMA nodes and setting a NUMA pinning topology for a high 
performance VM are both suggested configuration changes proposed to the 
user; neither is mandatory. 
Therefore, you can still set up a high performance VM even if your host 
doesn't support NUMA (i.e., only one NUMA node exists). Just ignore the 
recommendations that appear in the High Performance VM popup after saving 
the VM's configuration. 

Regards. 
Sharon 

On Sun, Dec 15, 2019 at 12:10 AM < supo...@logicworks.pt > wrote: 



Hi, 

My host only shows one NUMA node. Does that mean I cannot set up a high 
performance VM? 

Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 
___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BFTHSBIKINIZ2JPHPWINXK53X3AX3LA4/
 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RARON5TD5XNIXCH25RP76DCTKI7EQRRZ/


[ovirt-users] Re: Ovirt OVN help needed

2019-12-16 Thread Dominik Holler
On Sat, Dec 14, 2019 at 11:36 AM Strahil Nikolov 
wrote:

> Hi Dominik,
>
> yes I was looking for those settings.
>
> I have added again the external provider , but I guess the mess is even
> bigger as I made some stupid decisions (like removing 2 port groups :)
> without knowing what I'm doing) .
> Sadly I can't remove all packages on the engine and hosts and reinstall
> them from scratch.
>
> Pip fails to install the openstacksdk (centOS7 is not great for such
> tasks) on the engine and my lack of knowledge in OVN makes it even more
> difficult.
>
> So the symptoms are that 2 machines can communicate with each other only
> if they are on the same host; on separate hosts, no communication is
> happening.
>
>
This indicates that the tunnels between the hosts are not created.
Can you please check the /var/log/openvswitch/ovn-controller.log on both
hosts for errors and warnings, or share parts of the files here?
If this does not point us to a problem, OVN has to be reconfigured. If
possible, the easiest way to do this would be to ensure that
ovirt-provider-ovn is the default network provider of the hosts' cluster,
then put one host after another into maintenance mode and reinstall it.



> How I created the network via UI:
>
> 1. Networks - new
> 2. Fill in the name
> 3. Create on external provider
> 4. Network Port security -> disabled (even undefined does not work)
> 5.Connect to physical network -> ovirtmgmt
>
>
> I would be happy to learn more about OVN and thus I would like to make it
> work.
>
> Here is some info from the engine:
>
> [root@engine ~]# ovn-nbctl show
> switch 1288ed26-471c-4bc2-8a7d-4531f306f44c
> (ovirt-pxelan-2a88b2e0-d04b-4196-ad50-074501e4ed08)
> port c1eba112-5eed-4c04-b25c-d3dcfb934546
> addresses: ["56:6f:5a:65:00:06"]
> port 8b52ab60-f474-4d51-b258-cb2e0a53c34a
> type: localnet
> addresses: ["unknown"]
> port b2753040-881b-487a-92a1-9721da749be4
> addresses: ["56:6f:5a:65:00:09"]
> [root@engine ~]# ovn-sbctl show
> Chassis "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> hostname: "ovirt3.localdomain"
> Encap geneve
> ip: "192.168.1.41"
> options: {csum="true"}
> Chassis "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> hostname: "ovirt1.localdomain"
> Encap geneve
> ip: "192.168.1.90"
> options: {csum="true"}
> Chassis "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> hostname: "ovirt2.localdomain"
> Encap geneve
> ip: "192.168.1.64"
> options: {csum="true"}
> Port_Binding "b2753040-881b-487a-92a1-9721da749be4"
> Port_Binding "c1eba112-5eed-4c04-b25c-d3dcfb934546"
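Since the chassis and Port_Binding entries above look correct, a hedged next step is to verify the geneve tunnel path itself (the IPs below are the chassis IPs from the ovn-sbctl output; adjust to your hosts):

```shell
# 1. Each host must point at the engine's OVN southbound DB:
ovs-vsctl get Open_vSwitch . external_ids:ovn-remote

# 2. Geneve uses UDP port 6081; it must be open between the hosts:
firewall-cmd --list-ports | grep 6081 || echo "6081/udp not open"

# 3. Ping the remote chassis IP with a larger, non-fragmentable packet
#    to also catch MTU problems on the tunnel path:
ping -c 3 -M do -s 1400 192.168.1.64
```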
>
>
> Is it possible to remove the vNICs , Virtual Network + and recreate the
> ovn db to start over ?
>

@Miguel Duarte de Mora Barroso  Is there a hardcore
way of bypassing ovirt-provider-ovn to do this?


> I guess the other option is to create a VM that can be used to install
> python openstacksdk and modify via the python script from your previous
> e-mail.
>
>
Yes, a Fedora VM in oVirt works great, and creating a template from the
images in ovirt-image-repository is convenient.


>
> Best Regards,
> Strahil Nikolov
>
>
> On Friday, December 13, 2019 at 10:11:51 GMT+2, Dominik Holler <
> dhol...@redhat.com> wrote:
>
>
>
>
> On Fri, Dec 13, 2019 at 5:51 AM Strahil  wrote:
>
> Hi Dominik, All,
>
> I've checked '
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W6U4XJHNMYMD3WIXDCPGOXLW6DFMCYIM/'
> and the user managed to clear up and start over.
>
> I have removed the ovn-external-provider  from UI, but I forgot to copy
> the data from the fields.
>
> Do you know any refference guide (or any tips & tricks) for adding OVN ?
>
>
> The ovirt-provider-ovn entity can be added to oVirt Engine as a new
> provider with
> Type: External Network Provider
> Network Plugin: oVirt Network Provider for OVN
> Provider URL: https://YOUR_ENGINE_FQDN:9696
> Username: admin@internal
> Password: the admin@internal password
> Host Name: YOUR_ENGINE_FQDN
> API Port: 35357
> API Version: v2.0
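A hedged way to sanity-check those endpoints from the engine (YOUR_ENGINE_FQDN is a placeholder; an HTTP error such as 401 still proves the service answers, while "connection refused" points at ovirt-provider-ovn not running):

```shell
# Network API endpoint (port 9696):
curl -ks https://YOUR_ENGINE_FQDN:9696/v2.0/networks | head -c 200; echo

# Authentication endpoint (port 35357):
curl -ksI https://YOUR_ENGINE_FQDN:35357/v2.0/

# The service behind both ports:
systemctl status ovirt-provider-ovn
```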
>
> Is this the information you need?
>
>
> Thanks in advance.
>
> Best Regards,
> Strahil Nikolov
> On Dec 12, 2019 20:49, Strahil  wrote:
>
> Hi Dominik,
>
> Thanks for the reply.
>
> Sadly the openstack module is missing on the engine and I have to figure
> it out.
>
> Can't I just undeploy the ovn and then redeploy it back ?
>
> Best Regards,
> Strahil Nikolov
> On Dec 12, 2019 09:32, Dominik Holler  wrote:
>
> The cleanest way to clean up is to remove all entities on the OpenStack
> Network API on ovirt-provider-ovn, e.g. by something like
>
> https://gist.github.com/dominikholler/19bcdc5f14f42ab5f069086fd2ff5e37#file-list_security_groups-py-L25
> This should work, if not, please report a bug.
>
> To bypass the ovirt-provider-ovn, which is not recommended and might end
> in an inconsistent state, you could use ovn-nbctl .
>
>
>
> On Thu, Dec 12, 2019 at 3:33 AM Strahil Nikolov 
> wrote:
>
> Hi Community,
>
> can someone hint me how to get 

[ovirt-users] Re: Cannot forward traffic through VXLAN

2019-12-16 Thread Dominik Holler
On Fri, Dec 13, 2019 at 3:56 PM  wrote:

> > On Thu, Dec 12, 2019 at 4:27 PM  >
> >
> >
> > Non-external logical networks with vNIC profiles that have no network
> > filter while the VM is started (or the vNIC is hotplugged)
> > allow any MAC address. This works without any hook required.
> > The most simple flow for a lab would be to remove the network filter from
> > ovirtmgmt, attach ovirtmgmt to a VM and boot the VM.
> >
> Well this is where theory contradicts practice...
> Based on what you say layer 2 frames would traverse the VM Network bridge
> and reach VyOS vnic, which they do not.
> Layer 2 frames are dropped after leaving the VM and before reaching the
> VyOS vnic.
> In theory, if the VM bridge did not know where they should be forwarded, it
> should broadcast them to all attached ports, which again is not being
> done.
> So I am not sure if it is a bug, or a feature...
>

This works very reliably.
To check the oVirt networking related part, I tried the following setup:

VM1 <-vlan4->VM0<->ovirtmgmt<->dhcpserver/gateway

With a bridge on VM0 which connects the interfaces connected to vlan4 and
ovirtmgmt.
VM0 was the "CentOS 8 test image v20191009 for x86_64 (280f3e8)"
from ovirt-image-repository.
I installed cockpit in VM0 and added a bridge on cockpit web UI over the
two virtual NICs on VM0.
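For reference, the bridge on VM0 can also be created from the shell instead of the cockpit UI; a minimal sketch, assuming the two virtual NICs appear as eth0 and eth1 (check yours with `nmcli device status`):

```shell
# Create the bridge and enslave both NICs (STP off for a simple lab):
nmcli con add type bridge ifname br0 con-name br0 bridge.stp no
nmcli con add type bridge-slave ifname eth0 master br0
nmcli con add type bridge-slave ifname eth1 master br0
nmcli con up br0
```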

VM1 was able to get an IP address via DHCP and ping through the gateway to
the outside world.

Are you able to replicate this as a first step to isolate the problem?



> >
> >
> > As I wrote above, layer 2 tunneling from one VM to another should work.
> > Are you forced to extend the network on layer 2? If not,
> > two VMs connected by a tunnel or a VPN might be more straightforward and
> > would even limit layer 2 broadcasts.
> I agree Layer 3 would be the best way forward but we need layer 2
> extension since the firewalls require it for high availability as well and
> we need pcsd VIPs attached to monitored services to have high availability.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFV4A4YIDL7TFH2DQ3HYMO6UK5DLIIQT/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YS4OZLQYA23DVQJSOTCKJTRZGWCQCKMW/


[ovirt-users] Re: Libgfapi considerations

2019-12-16 Thread Alex McWhirter
I also use libgfapi in prod. 


1. This is a pretty annoying issue; I wish engine-config would look to
see if it is already enabled and just keep it that way. 


2. Edit /etc/libvirt/qemu.conf and set dynamic ownership to 0, will stop
the permission changes. 

3. I don't see this error on any of my clusters, all using libgfapi. 


I also have no issues using snapshots with libgfapi, but live migration
between storage domains indeed does not work. 
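A hedged sketch of point 2 above: setting `dynamic_ownership = 0` so qemu stops resetting image ownership. It is demonstrated on a sample copy so it can be dry-run anywhere; point FILE at /etc/libvirt/qemu.conf on a real host (GNU sed assumed):

```shell
FILE=/tmp/qemu.conf.sample
printf '#dynamic_ownership = 1\n' > "$FILE"   # stand-in for the real file
sed -i 's/^#\?dynamic_ownership *=.*/dynamic_ownership = 0/' "$FILE"
grep '^dynamic_ownership' "$FILE"             # prints: dynamic_ownership = 0
# on a real host, restart libvirtd afterwards: systemctl restart libvirtd
```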


On 2019-12-16 12:46, Darrell Budic wrote:

I use libgfapi in production; the performance is worth a couple of quirks for me. 

- watch major version updates, they'll silently turn it off because the engine starts using a new version variable 
- VM/qemu security quirk that resets ownership when the VM quits, was supposedly fixed in 4.3.6 but I still have it happen to me, a cron'd chown keeps it under control for me 
- some VMs cause a libvirt/vdsmd interaction that results in failed stats query, and the engine thinks my VMs are offline because the stats gathering is stuck. hoped a bug fix in 4.3.6 would take care of this too, but didn't. may be my VMs though, still analyzing for specific file issues 

I need to spend some time doing a little more research and filing/updating some bug reports, but it's been a busy end of year so far... 


-Darrell

On Dec 14, 2019, at 5:47 PM, Strahil Nikolov  wrote: 

According to GlusterFS Storage Domain [1]  
the feature is not the default as it is incompatible with Live Storage Migration. 

Best Regards, 
Strahil Nikolov 

On Saturday, December 14, 2019 at 17:06:32 GMT+2, Jayme wrote: 


Are there currently any known issues with using libgfapi in the latest stable 
version of ovirt in hci deployments?  I have recently enabled it and have 
noticed a significant (over 4x) increase in io performance on my vms. I'm 
concerned however since it does not seem to be an ovirt default setting.  Is 
libgfapi considered safe and stable to use in ovirt 4.3 hci? 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FYVTG3NUIXE5LJBBVEGGKHQFOGKJ5CU2/
 ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDZD4W5UYYR6MROFS2OS5HLZCUIJUVIJ/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6KAWATJCAONOXE2HSLPXKC4YB23JE3KA/




Links:
--
[1]
https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain.html___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LLLC3CMLWGSWFMWTYRBTRBOD764N2EEF/


[ovirt-users] Re: Libgfapi considerations

2019-12-16 Thread Jayme
The performance is certainly attractive from the minimal testing I've done
with it (almost 5x I/O performance increase).  For my environment I'm
hitting the snapshot bug on replica 3 setups so I cannot snapshot VMs and
doing so breaks the VM.  This is a deal breaker for me since the VM backup
software I'm using relies on snapshots.  The other, of course, is the lack of
HA, which I could probably work around.  Is there actually a timeline for when
libgfapi is expected to be working properly?  Some of the bug reports I've
seen date back to 2017.

On Mon, Dec 16, 2019 at 1:46 PM Darrell Budic 
wrote:

> I use libgfapi in production; the performance is worth a couple of quirks
> for me.
>
> - watch major version updates, they’ll silently turn it off because the
> engine starts using a new version variable
> - VM/qemu security quirk that resets ownership when the VM quits, was
> supposedly fixed in 4.3.6 but I still have it happen to me, a cron’d chown
> keeps it under control for me
> - some VMs cause a libvirt/vdsmd interaction that results in failed stats
> query, and the engine thinks my VMs are offline because the stats gathering
> is stuck. hoped a bug fix in 4.3.6 would take care of this too, but didn’t.
> may be my VMs though, still analyzing for specific file issues
>
> I need to spend some time doing a little more research and filing/updating
> some bug reports, but it’s been a busy end of year so far…
>
>   -Darrell
>
> On Dec 14, 2019, at 5:47 PM, Strahil Nikolov 
> wrote:
>
> According to GlusterFS Storage Domain
> 
>
> the feature is not the default as it is incompatible with Live Storage
> Migration.
>
> Best Regards,
> Strahil Nikolov
>
>
> On Saturday, December 14, 2019 at 17:06:32 GMT+2, Jayme <
> jay...@gmail.com> wrote:
>
>
> Are there currently any known issues with using libgfapi in the latest
> stable version of ovirt in hci deployments?  I have recently enabled it and
> have noticed a significant (over 4x) increase in io performance on my vms.
> I’m concerned however since it does not seem to be an ovirt default
> setting.  Is libgfapi considered safe and stable to use in ovirt 4.3 hci?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FYVTG3NUIXE5LJBBVEGGKHQFOGKJ5CU2/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDZD4W5UYYR6MROFS2OS5HLZCUIJUVIJ/
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZPHBO66WLXTCBW5XD2A3HLBY7AFG3JAJ/


[ovirt-users] Re: Libgfapi considerations

2019-12-16 Thread Darrell Budic
I use libgfapi in production; the performance is worth a couple of quirks for me.

- watch major version updates, they’ll silently turn it off because the engine 
starts using a new version variable
- VM/qemu security quirk that resets ownership when the VM quits, was 
supposedly fixed in 4.3.6 but I still have it happen to me, a cron’d chown 
keeps it under control for me
- some VMs cause a libvirt/vdsmd interaction that results in failed stats 
query, and the engine thinks my VMs are offline because the stats gathering is 
stuck. hoped a bug fix in 4.3.6 would take care of this too, but didn’t. may be 
my VMs though, still analyzing for specific file issues

I need to spend some time doing a little more research and filing/updating some 
bug reports, but it’s been a busy end of year so far…

  -Darrell

> On Dec 14, 2019, at 5:47 PM, Strahil Nikolov  wrote:
> 
> According to GlusterFS Storage Domain 
> 
>  
> the feature is not the default as it is incompatible with Live Storage 
> Migration.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> On Saturday, December 14, 2019 at 17:06:32 GMT+2, Jayme
> wrote:
> 
> 
> Are there currently any known issues with using libgfapi in the latest stable 
> version of ovirt in hci deployments?  I have recently enabled it and have 
> noticed a significant (over 4x) increase in io performance on my vms. I’m 
> concerned however since it does not seem to be an ovirt default setting.  Is 
> libgfapi considered safe and stable to use in ovirt 4.3 hci?
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FYVTG3NUIXE5LJBBVEGGKHQFOGKJ5CU2/
>  
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDZD4W5UYYR6MROFS2OS5HLZCUIJUVIJ/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6KAWATJCAONOXE2HSLPXKC4YB23JE3KA/


[ovirt-users] Re: AWX and error using ovirt as an inventory source

2019-12-16 Thread Gianluca Cecchi
On Fri, Dec 13, 2019 at 11:50 PM Nathanaël Blanchet 
wrote:

> Okay I got it!
>

Thanks for your time ;-)


> The container version is slightly different from awx-rpm. Everything
> happens in the awx_web container. You must use persistent volumes so that
> the venv and project data survive the ephemeral container.
> You must use Python 3 with Ansible 2.9.x.
>

OK. I scratched everything and started from a clean environment.

cd installer && vi inventory
> custom_venv_dir=/opt/my-envs
> project_data_dir=/var/lib/awx/projects
>
>
done


> you can run installer
> ansible-playbook -i inventory install.yml
>
>
done


> check mounts
> docker inspect -f "{{ .Mounts }}" awx_web
> [{bind /opt/my-envs /opt/my-envs rw true rprivate} {bind
> /var/lib/awx/projects /var/lib/awx/projects rw true rprivate}]
>
>
confirmed


> From now on, you are modifying the container itself. Be careful: data are
> not persistent across the container lifecycle, so what follows is for a
> POC. You should build your own container image with the desired parameters.
>

OK, this will be the next step of the configuration phase, after verifying
all is OK. Suggestions?


> yum install -y make gcc redhat-rpm-config python3-devel libcurl-devel
> libxml2-devel
>

This command above puts in these packages:

Installing:
 gcc                     x86_64   8.2.1-3.5.el8                        AppStream   23 M
 libxml2-devel           x86_64   2.9.7-5.el8                          AppStream  1.0 M
 python36-devel          x86_64   3.6.8-2.module_el8.0.0+33+0a10c0e1   AppStream   16 k
 redhat-rpm-config       noarch   116-1.el8.0.1                        AppStream   82 k
 libcurl-devel           x86_64   7.61.1-8.el8                         BaseOS     820 k
Installing dependencies:
 annobin                 x86_64   8.64-1.el8                           AppStream  187 k
 cmake-filesystem        x86_64   3.11.4-3.el8                         AppStream   40 k
 cpp                     x86_64   8.2.1-3.5.el8                        AppStream   10 M
 dwz                     x86_64   0.12-9.el8                           AppStream  109 k
 efi-srpm-macros         noarch   3-2.el8                              AppStream   22 k
 ghc-srpm-macros         noarch   1.4.2-7.el8                          AppStream  9.3 k
 go-srpm-macros          noarch   2-16.el8                             AppStream   14 k
 isl                     x86_64   0.16.1-6.el8                         AppStream  841 k
 libmpc                  x86_64   1.0.2-9.el8                          AppStream   59 k
 ocaml-srpm-macros       noarch   5-4.el8                              AppStream  9.4 k
 openblas-srpm-macros    noarch   2-2.el8                              AppStream  7.9 k
 perl-srpm-macros        noarch   1-25.el8                             AppStream   11 k
 platform-python-devel   x86_64   3.6.8-4.el8_0                        AppStream  242 k
 python-rpm-macros       noarch   3-37.el8                             AppStream   14 k
 python-srpm-macros      noarch   3-37.el8                             AppStream   14 k
 python3-rpm-generators  noarch   5-4.el8                              AppStream   24 k
 python3-rpm-macros      noarch   3-37.el8                             AppStream   13 k
 qt5-srpm-macros         noarch   5.11.1-2.el8                         AppStream   11 k
 rust-srpm-macros        noarch   5-2.el8                              AppStream  9.2 k
 file                    x86_64   5.33-8.el8                           BaseOS      76 k
 glibc-devel             x86_64   2.28-42.el8.1                        BaseOS     1.0 M
 glibc-headers           x86_64   2.28-42.el8.1                        BaseOS     465 k
 kernel-headers          x86_64   4.18.0-80.11.2.el8_0                 BaseOS     1.6 M
 libgomp                 x86_64   8.2.1-3.5.el8                        BaseOS     189 k
 libxcrypt-devel         x86_64   4.1.1-4.el8                          BaseOS      25 k
 xz-devel                x86_64   5.2.4-3.el8                          BaseOS      62 k
 zip                     x86_64   3.0-23.el8                           BaseOS     270 k
 zlib-devel              x86_64   1.2.11-10.el8                        BaseOS      56 k


> source /opt/my-envs/py36/bin/activate
>

Before this step I had to create the virtualenv, so I executed:

bash-4.4# virtualenv /opt/my-envs/py36
Using base prefix '/usr'
  No LICENSE.txt / LICENSE found in source
New python executable in /opt/my-envs/py36/bin/python3.6
Also creating executable in /opt/my-envs/py36/bin/python
Installing setuptools, pip, wheel...
done.
bash-4.4#

and then:
bash-4.4# source /opt/my-envs/py36/bin/activate
(py36) bash-4.4#
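A quick way to confirm from inside Python that the venv interpreter is really the one running — a minimal sketch, not AWX-specific:

```python
import sys

# Inside an activated virtualenv, sys.prefix points at the venv directory,
# while the original interpreter location is kept in real_prefix (classic
# virtualenv) or base_prefix (stdlib venv / PEP 405).
def in_virtualenv():
    base = getattr(sys, "real_prefix", None) or getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base
```

Running `python -c` with this inside `(py36)` should report True; from the system interpreter it should report False.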

> pip install psutil
>

ok. Got:
Successfully installed psutil-5.6.7

> export PYCURL_SSL_LIBRARY=openssl; pip install pycurl --compile --no-cache-dir
>

This ran into a problem because openssl-devel was missing:

   gcc -pthread -Wno-unused-result -Wsign-compare
-DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
-D_GNU_SOURCE -fPIC -fwrapv -fPIC -DPYCURL_VERSION="7.43.0.3"
-DHAVE_CURL_SSL=1 -DHAVE_CURL_OPENSSL=1 -DHAVE_CURL_SSL=1
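The compile above fails because pycurl needs the OpenSSL development headers. A hypothetical pre-flight check, under the assumption of a RHEL/CentOS 8 base image and the standard header paths (both are assumptions, not taken from the thread):

```python
import os

# Assumed header locations on a RHEL/CentOS 8 container:
REQUIRED_HEADERS = [
    "/usr/include/curl/curl.h",    # provided by libcurl-devel (installed above)
    "/usr/include/openssl/ssl.h",  # provided by openssl-devel (missing here)
]

def missing_headers(paths=REQUIRED_HEADERS):
    """Return the header files that are not present on this system."""
    return [p for p in paths if not os.path.exists(p)]
```

If `ssl.h` is reported missing, installing openssl-devel with yum and re-running the `pip install pycurl` step should let the compile proceed.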

[ovirt-users] Re: Postgres stuck on 100% after Engine upgrade from 4.2.8 to 4.3.7

2019-12-16 Thread Lucie Leistnerova

Hi Marko,

On 12/16/19 9:50 AM, Vrgotic, Marko wrote:

Hi Lucie,

My apologies for not replying sooner.
While waiting for a reply, I have decided to execute SHE restore 
exercise in order to get familiar with Restore procedure.

Therefore, I can not provide additional details.
Regardless, thank you for providing me a way to check for deadlocks.

No problem :) But DWH should not create deadlocks; that sounds suspicious. 
We test upgrades in regression but I'm not sure whether we would catch 
such issues. I'll check.
If I may ask, in case there was a deadlock, is there a common or 
standard way to resolve them, or would that actually require digging 
deeper into DB and process schemas?



Just restarting ovirt-engine-dwhd service should clean the locks.

Kindly awaiting your reply.

Marko Vrgotic
Sent from my iPhone


On 10 Dec 2019, at 08:10, Lucie Leistnerova  wrote:



Hi Marko,

aren't there some deadlocks present?

Try the queries here
https://wiki.postgresql.org/wiki/Lock_Monitoring
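For convenience, a simplified variant of the blocking-locks query from that wiki page, wrapped so it can be run over any DB-API connection (e.g. psycopg2) — a sketch, not the exact query from the page:

```python
# Simplified blocking-locks query: for each ungranted lock, find another
# backend holding a conflicting lock on the same object.
BLOCKING_LOCKS_SQL = """
SELECT blocked.pid    AS blocked_pid,
       blocking.pid   AS blocking_pid,
       activity.query AS blocked_query
FROM pg_catalog.pg_locks blocked
JOIN pg_catalog.pg_stat_activity activity ON activity.pid = blocked.pid
JOIN pg_catalog.pg_locks blocking
     ON blocking.locktype = blocked.locktype
    AND blocking.database IS NOT DISTINCT FROM blocked.database
    AND blocking.relation IS NOT DISTINCT FROM blocked.relation
    AND blocking.transactionid IS NOT DISTINCT FROM blocked.transactionid
    AND blocking.pid != blocked.pid
WHERE NOT blocked.granted;
"""

def find_blocking(conn):
    """Run the query on an open connection; an empty result means no waiters."""
    with conn.cursor() as cur:
        cur.execute(BLOCKING_LOCKS_SQL)
        return cur.fetchall()
```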

Please also check dwh log - 
/var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log


Best regards,
Lucie

On 12/9/19 2:50 PM, Vrgotic, Marko wrote:


Hi oVirt,

Since the oVirt upgrade from 4.2.8 to 4.3.7, SHE postgres has been running at 
100% CPU with 4 processes, all related to the ovirt_engine_history DB:


 PID  USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND

 2617 postgres  20   0  372668  30876  26352 R 100.0  0.2  63:40.19 postgres: ovirt_engine_history ovirt_engine_history 127.0.0.1(50226) INSERT

 3554 postgres  20   0  368680  15120  11716 R 100.0  0.1  56:45.72 postgres: ovirt_engine_history ovirt_engine_history 127.0.0.1(50448) BIND

 2623 postgres  20   0  368068  17640  14896 R  99.3  0.1  42:35.36 postgres: ovirt_engine_history ovirt_engine_history 127.0.0.1(50228) BIND

 2880 postgres  20   0  368680  20404  17004 R  99.0  0.1  65:13.96 postgres: ovirt_engine_history ovirt_engine_history 127.0.0.1(50252) BIND

The output of select * from pg_stat_activity; gives me the following:

 16401 | ovirt_engine_history | 2617 | 16386 | ovirt_engine_history |  |
 127.0.0.1 |  | 50226 | 2019-12-09 13:37:07.779261+01 |
 2019-12-09 13:37:09.555704+01 | 2019-12-09 13:39:04.915431+01 |
 2019-12-09 13:39:04.915432+01 |  |  | active | 260332629 | 260332629 |
 INSERT INTO vm_disks_usage_samples_history
 (history_datetime,vm_id,disks_usage) VALUES ($1,$2,$3) | client backend

 16401 | ovirt_engine_history | 2623 | 16386 | ovirt_engine_history |  |
 127.0.0.1 |  | 50228 | 2019-12-09 13:37:07.820982+01 |
 2019-12-09 14:00:00.042448+01 | 2019-12-09 14:00:00.997228+01 |
 2019-12-09 14:00:00.997229+01 |  |  | active | 260333458 | 260332629 |
 SELECT history_datetime, vm_id, disks_usage
 FROM vm_disks_usage_samples_history
 WHERE history_datetime >= '2019-12-09 07:00:00.00+0100'
   AND history_datetime < '2019-12-09 08:00:00.00+0100' | client backend
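To spot which of these backends have been busy the longest, the pg_stat_activity rows can be filtered on query runtime — a minimal sketch, assuming rows shaped like dicts with the `state` and `query_start` columns shown above:

```python
from datetime import datetime, timedelta

# Flag active backends whose current query has been running longer than a
# threshold, given pg_stat_activity-style rows.
def long_running(rows, now, threshold=timedelta(minutes=1)):
    return [r for r in rows
            if r["state"] == "active" and now - r["query_start"] > threshold]
```

Applied to the output above, both the INSERT (pid 2617) and the SELECT (pid 2623) backends would qualify, matching the 100% CPU processes seen in top.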