[ovirt-users] Re: OVN and MTU of vnics based on it clarification

2018-07-08 Thread Dominik Holler
On Sat, 7 Jul 2018 16:28:49 +0200
Gianluca Cecchi  wrote:

> Hello,
> I'm testing a virtual rhcs cluster based on 4 nodes that are CentOS
> 7.4 VMs. So the stack is based on Corosync/Pacemaker
> I have two oVirt hosts and so my plan is to put two VMs on first host
> and two VMs on the second host, to simulate a two sites config and
> site loss, before going to physical production config.
> Incidentally the two hypervisor hosts are indeed placed into different
> physical datacenters.
> So far so good.
> I decided to use OVN for the intracluster dedicated network
> configured for corosync (each VM has two vnics, one on production lan
> and one for intracluster).
> I found that the cluster formed (even with only two nodes) only if the
> VMs ran on the same host, while they seem unable to communicate when
> running on different hosts. Ping works and an ssh session between them
> on the intracluster LAN succeeds, but the cluster doesn't come up. So
> after digging through past mailing list threads I found this recent one:
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/RMS7XFOZ67O3ERJB4ABX5MGXTE5FO2LT/
> 
> where the solution was to set the MTU of the interfaces on the OVN
> network to 1400.
> It seems this resolves the problem in my scenario too:
> - I live migrated two VMs to the second host and the RHCS clusterware
> didn't complain
> - I relocated a resource group composed of several LVs/filesystems, a VIP
> and an application from the VM running on host1 to the VM running on host2
> without problems.
> 

There will be a new feature [1][2] about propagating the MTU of the
logical network into the guest.
Starting with oVirt 4.2.5, a logical network MTU <= 1500 will be propagated
for clusters with switch type OVS and Linux bridge, while an MTU > 1500 will
be propagated only for clusters with switch type Linux bridge, provided the
requirements [3] are fulfilled. OVS clusters will support MTU > 1500 at the
latest in oVirt 4.3.
This feature also introduces a new config setting, "MTU for tunneled
networks", which will initially default to 1442.

> So the question is: can anyone confirm what the guidelines are for
> setting up vNICs on OVN?

In the context of oVirt, I am only aware of [1] and [4].
Starting from oVirt 4.1 you can activate OVN's internal DHCP server
by creating a subnet for the network [4]. The default configuration will
offer an MTU of 1442 to the guest, which is optimal for GENEVE-tunneled
networks over physical networks with an MTU of 1500.
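
As a minimal sketch of what this means inside a guest (the interface name
eth1 and the CentOS 7 ifcfg path below are assumptions for illustration;
1442 is just the value the OVN DHCP would offer by default):

  # check the MTU currently used on the intracluster interface
  ip link show eth1
  # set it to 1442 for the current boot only, if the OVN DHCP is not used
  ip link set dev eth1 mtu 1442
  # make the setting persistent across reboots on CentOS 7
  echo 'MTU=1442' >> /etc/sysconfig/network-scripts/ifcfg-eth1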

> Is there already a document in place about MTU
> settings for OVN-based vNICs?

There are some documents about MTU in OpenStack referenced in [1].

> Are there other particular settings or
> limitations if I want to configure a vNIC on OVN?
> 

libvirt's network filters are not applied to OVN networks, so you
should disable network filtering in oVirt's vNIC profile. This is
tracked in [5].


[1]
  
https://ovirt.org/develop/release-management/features/network/managed_mtu_for_vm_networks/

[2]
  https://github.com/oVirt/ovirt-site/pull/1667

[3]
  
https://ovirt.org/develop/release-management/features/network/managed_mtu_for_vm_networks/#limitations

[4]
  https://github.com/oVirt/ovirt-provider-ovn/#section-dhcp

[5]
  https://bugzilla.redhat.com/show_bug.cgi?id=1502754

> Thanks,
> 
> Gianluca


[ovirt-users] Re: Device Mapper Timeout when using Gluster Snapshots

2018-07-08 Thread Maor Lipchuk
Thanks for sharing!

On Sun, Jul 8, 2018 at 1:42 PM, Hesham Ahmed  wrote:

> Apparently this is known default behavior of LVM snapshots and in that
> case maybe Cockpit in oVirt node should create mountpoints using
> /dev/mapper path instead of UUID by default. The timeout issue
> persists even after switching to /dev/mapper/devices in fstab
> On Sun, Jul 8, 2018 at 12:59 PM Hesham Ahmed  wrote:
> >
> > I also noticed that Gluster Snapshots have the SAME UUID as the main
> > LV and if using UUID in fstab, the snapshot device is sometimes
> > mounted instead of the primary LV
> >
> > For instance:
> > /etc/fstab contains the following line:
> >
> > UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
> > auto inode64,noatime,nodiratime,x-parent=dMeNGb-34lY-wFVL-WF42-
> hlpE-TteI-lMhvvt
> > 0 0
> >
> > # lvdisplay gluster00/lv01_data01
> >   --- Logical volume ---
> >   LV Path/dev/gluster00/lv01_data01
> >   LV Namelv01_data01
> >   VG Namegluster00
> >
> > # mount
> > /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0 on
> > /gluster_bricks/gv01_data01 type xfs
> > (rw,noatime,nodiratime,seclabel,attr2,inode64,sunit=
> 1024,swidth=2048,noquota)
> >
> > Notice above the device mounted at the brick mountpoint is not
> > /dev/gluster00/lv01_data01 and instead is one of the snapshot devices
> > of that LV
> >
> > # blkid
> > /dev/mapper/gluster00-lv01_shaker_com_sa:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-4ca8eef409ec4932828279efb91339de_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-59992b6c14644f13b5531a054d2aa75c_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-362b50c994b04284b1664b2e2eb49d09_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-0b3cc414f4cb4cddb6e81f162cdb7efe_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-da98ce5efda549039cf45a18e4eacbaf_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-4ea5cce4be704dd7b29986ae6698a666_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> >
> > Notice the UUID of LV and its snapshots is the same causing systemd to
> > mount one of the snapshot devices instead of LV which results in the
> > following gluster error:
> >
> > gluster> volume start gv01_data01 force
> > volume start: gv01_data01: failed: Volume id mismatch for brick
> > vhost03:/gluster_bricks/gv01_data01/gv. Expected volume id
> > be6bc69b-c6ed-4329-b300-3b9044f375e1, volume id
> > 55e97e74-12bf-48db-99bb-389bb708edb8 found
> >
> > On Sun, Jul 8, 2018 at 12:32 PM  wrote:
> > >
> > > I am facing this trouble since version 4.1 up to the latest 4.2.4, once
> > > we enable gluster snapshots and accumulate some snapshots (as few as 15
> > > snapshots per server) we start having trouble booting the server. The
> > > server enters emergency shell upon boot after timing out waiting for
> > > snapshot devices. Waiting a few minutes and pressing Control-D then
> boots the server normally. In case of very large number of snapshots (600+)
> it can take days before the sever will boot. Attaching journal
> > > log, let me know if you need any other logs.
> > >
> > > Details of the setup:
> > >
> > > 3 node hyperconverged oVirt setup (64GB RAM, 8-Core E5 Xeon)
> > > oVirt 4.2.4
> > > oVirt Node 4.2.4
> > > 10Gb Interface
> > >
> > > Thanks,
> > >
> > > Hesham S. Ahmed


[ovirt-users] Re: fence_rhevm and ovirt 4.2

2018-07-08 Thread Gianluca Cecchi
On Sun, Jul 8, 2018 at 1:16 PM, Martin Perina  wrote:

Hi, thanks for your previous answers

- UserRole ok
>> Back in April 2017 for version 4.1.1 I had problems and it seems I had to
>> set super user privileges for the "fencing user"
>> See thread here
>> https://lists.ovirt.org/archives/list/users@ovirt.org/messag
>> e/FS5YFU5ZXIYDC5SWQY4MZS65UDKSX7JS/
>>
>> Now instead I only set UserRole for the defined "fenceuser" on the
>> virtual cluster VMS and it works ok
>>
>> - for a VM with permissions already setup:
>> [root@cl1 ~]# fence_rhevm -a 10.4.192.49 -l "fenceuser@internal" -S
>> /usr/local/bin/pwd_ovmgr01.sh -z  --ssl-insecure -o status
>> --shell-timeout=20 --power-wait=10 -n cl1
>> Status: ON
>>
>> - for a VM still without permissions
>> [root@cl1 ~]# fence_rhevm -a 10.4.192.49 -l "fenceuser@internal" -S
>> /usr/local/bin/pwd_ovmgr01.sh -z  --ssl-insecure -o status
>> --shell-timeout=20 --power-wait=10 -n cl2
>> Failed: Unable to obtain correct plug status or plug is not available
>>
>> - for a VM with permissions already setup:
>> I'm able to make power off / power on of the VM
>>
>
> ​Eli, please take a look at above, that might be the issue you saw with
> fence_rhevm
>
>
Perhaps I didn't explain it clearly enough.
My note was to confirm that in my current tests it is sufficient to create a
user in the internal domain and assign it the UserRole permission on the
virtual cluster VMs to be able to use that user for fencing VMs, while
back in 4.1.1 I was forced to create a user with superuser rights.
Or did you mean something else when asking for Eli's input?
In my case, to get the fencing agent working in my 4-virtual-node CentOS 7.4
cluster, I created the fencing agent this way:

pcs stonith create vmfence fence_rhevm \
  pcmk_host_map="intracl1:cl1;intracl2:cl2;intracl3:cl3;intracl4:cl4" \
  ipaddr=10.4.192.49 ssl=1 ssl_insecure=1 login="fenceuser@internal" \
  passwd_script="/usr/local/bin/pwd_ovmgr01.sh" \
  shell_timeout=10 power_wait=10

So that now I have:

[root@cl1 ~]# pcs stonith show vmfence
 Resource: vmfence (class=stonith type=fence_rhevm)
  Attributes: ipaddr=10.4.192.49 login=fenceuser@internal
passwd_script=/usr/local/bin/pwd_ovmgr01.sh
pcmk_host_map=intracl1:cl1;intracl2:cl2;intracl3:cl3;intracl4:cl4
power_wait=10 shell_timeout=10 ssl=1 ssl_insecure=1
  Operations: monitor interval=60s (vmfence-monitor-interval-60s)
[root@cl1 ~]#
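
As a sanity check before the real test below, one could also trigger a fence
action by hand (note this really power-cycles the target VM, so use a test
node; intracl2 here is just one of the names from the pcmk_host_map above):

  pcs stonith fence intracl2   # asks pacemaker to fence (power-cycle) cl2
  crm_mon -1                   # one-shot view of the fence action and node state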

I forced a panic on a node that was providing a cluster group of resources,
and it was correctly fenced (powered off / powered on), with the service
relocated to another node (after the power-off action completed)

[root@cl1 ~]# echo 1 > /proc/sys/kernel/sysrq
[root@cl1 ~]# echo c > /proc/sysrq-trigger

and this is the chain of events I see in the meantime. The VM cl1 is
indeed powered off and then on, the service is relocated to cl2 in this
case, and lastly cl1 rejoins the cluster after finishing the power-on phase.
I don't know if the error messages I see every time fencing takes action are
related to multiple nodes trying to fence at once and conflicting, or
something else...


Jul 8, 2018, 3:23:02 PM User fenceuser@internal-authz connecting from
'10.4.4.69' using session
'FC9UfrgCj9BM/CW5o6iymyMhqBXUDnNJWD20QGEqLccCMC/qYlsv4vC0SBRSlbNrtfdRCx2QmoipOdNk0UrsHQ=='
logged in.

Jul 8, 2018, 3:22:37 PM VM cl1 started on Host ov200

Jul 8, 2018, 3:22:21 PM User fenceuser@internal-authz connecting from
'10.4.4.63' using session
'6nMvcZNYs+aRBxifeA2aBaMsWVMCehCx1LLQdV5AEyM/Zrx/YihERxfLPc2KZPrvivy86rS+ml1Ic6BqnIKBNw=='
logged in.

Jul 8, 2018, 3:22:11 PM VM cl1 was started by fenceuser@internal-authz
(Host: ov200).

Jul 8, 2018, 3:22:10 PM VM cl1 is down. Exit message: Admin shut down from
the engine

Jul 8, 2018, 3:22:10 PM User fenceuser@internal-authz connecting from
'10.4.4.63' using session
'YCvbpVTy9fWAl+UB2g6hlJgqECvCWZYT0cvMxlgBTzcO2LosBh8oGPxsXBP/Y8TN0x7tYSfjxKr4al1g246nnA=='
logged in.

Jul 8, 2018, 3:22:10 PM Failed to power off VM cl1 (Host: ov200, User:
fenceuser@internal-authz).

Jul 8, 2018, 3:22:10 PM Failed to power off VM cl1 (Host: ov200, User:
fenceuser@internal-authz).

Jul 8, 2018, 3:22:10 PM VDSM ov200 command DestroyVDS failed: Virtual
machine does not exist: {'vmId': u'ff45a524-3363-4266-89fe-dfdbb87a8256'}

Jul 8, 2018, 3:22:10 PM VM cl1 powered off by fenceuser@internal-authz
(Host: ov200).

Jul 8, 2018, 3:21:59 PM User fenceuser@internal-authz connecting from
'10.4.4.63' using session
'tVmcRoYhzWOMgh/1smPidV0wFme2f5jWx3wdWDlpG7xHeUhTu4QJJk+l7u8zW2S73LK3lzai/kPtreSanBmAIA=='
logged in.

Jul 8, 2018, 3:21:48 PM User fenceuser@internal-authz connecting from
'10.4.4.69' using session 'iFBOa5vdfJBu4aqVdqJJbwm


- BTW: the monitoring action every 60 seconds generates many lines in the
events list. Is there a way to avoid this?

Gianluca

[ovirt-users] Re: Advice on deploying oVirt hyperconverged environment Node based

2018-07-08 Thread Tal Bar-Or
Thanks a lot.

On Sun, Jul 8, 2018 at 09:14, Maor Lipchuk  wrote:

> cc'ing Denis and Sahina,
> Perhaps they can share their experience and insights with the hyperconverged
> environment.
>
> Regards,
> Maor
>
> On Fri, Jul 6, 2018 at 9:47 AM, Tal Bar-Or  wrote:
>
>> Hello All,
>> I am about to deploy a new oVirt system for our developers, which we plan
>> to be a hyperconverged, oVirt Node-based environment.
>>
>> The workload will mostly be build machines that compile our
>> code, which involves lots of small files and intensive IO.
>> I plan to build two Gluster volume "layers": one based on SAS drives for
>> the OS disks, and a second, NVMe-based one for intensive IO.
>> I would expect the system to be resilient/highly available and at
>> the same time give good enough IO for the builder VMs, which will be at
>> least 6 to 8 guests.
>> The system hardware would be as follows:
>> *chassis*: 4x HP DL380 gen8
>> *each server hardware:*
>> *cpu*: 2x E5-2690v2
>> *memory*: 256GB
>> *Disks*: 12x 1.2TB SAS 10k disks, 2 mirrored for the OS (or using a
>> Kingston 2x 128GB mirror), the rest for the VM OS volume.
>> *NVMe*: 2x 960GB Kingston KC1000 for the builders compiling source code
>> *Network*: 4 ports Intel 10Gbit/s SFP+
>>
>> Given the above configuration, my question is what would be
>> best practice in terms of Gluster volume type:
>> *Distributed, Replicated, Distributed Replicated, Dispersed, or
>> Distributed Dispersed*?
>> What is the suggestion for hardware RAID: level 5 or 6, or should I use ZFS?
>> For node network communication I intend to use 3 ports for storage
>> and one port for the guest network. Regarding Gluster inter-node
>> communication, which is better: would I gain more from a 3x 10G LACP bond
>> or from one network per Gluster volume?
>>
>> Please advise
>> Thanks
>>
>> --
>> Tal Bar-or
>>


[ovirt-users] Re: Ovirt vs RHEV

2018-07-08 Thread Yaniv Lavi
Hi,
Red Hat has an upstream-first policy, meaning that in terms of code there is
no difference.
The main differences of a RHV subscription are:

   1. A longer support life cycle and ELS/EUS options with security-first
   priorities.
   2. Access to support with different levels of SLA options.
   3. More stabilization cycles based on QA testing.
   4. Certification with partners, hardware, security standards and other
   open source products.

The goals are to improve cost efficiency, security and
predictability, in order to allow customers to contribute back to
community development and innovation.


Thanks,

YANIV LAVI

SENIOR TECHNICAL PRODUCT MANAGER

Red Hat Israel Ltd. 

34 Jerusalem Road, Building A, 1st floor

Ra'anana, Israel 4350109

yl...@redhat.com | T: +972-9-7692306/8272306 | F: +972-9-7692223 | IM: ylavi




On Mon, Jul 2, 2018 at 2:08 PM Krzysztof Wajda  wrote:

> Hi,
>
> can anyone explain the differences between oVirt and RHEV in version 4.1? I
> know that RHEV has a different product life cycle, and there are some
> differences in the installation process. Do you know if there are any
> additional features in oVirt or RHEV? Do you know of any official statements
> from Red Hat about the differences? From my experience there are not too many
> differences, but I need to prepare such a comparison for the business, so I
> would like to get the info from a reliable source.
>
>
> Chris


[ovirt-users] Re: change from LDAP to AD authentication

2018-07-08 Thread Martin Perina
On Thu, Jul 5, 2018 at 12:36 PM,  wrote:

> Hello,
>  as part of our policy I have to change from LDAP to Active
> Directory for authentication in our oVirt system.


Hmm, do I understand correctly that you are moving oVirt users from
some other LDAP server to AD? Any reason other than political to do that?

> I have managed to configure a test system that allows users to log in using
> the CN (sAMAccountName) as before. The users in the system using the AD
> namespace are using their UPN as their user name.
> Do we have to copy permissions from all the old accounts to their new
> accounts, or is there a way to rename them to the UPN, retaining their old
> permissions?
>

I don't think there is any other way than to copy the permissions. But you can
automate the process using, for example, the
ovirt_permissions/ovirt_permissions_facts Ansible modules [1] or one of
our SDKs (Python, Java, Ruby).

Martin

[1]
https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html#ovirt


> Thanks,
> Paul S.



-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.


[ovirt-users] Re: fence_rhevm and ovirt 4.2

2018-07-08 Thread Martin Perina
On Sat, Jul 7, 2018 at 5:14 PM, Gianluca Cecchi 
wrote:

> Hello,
> I'm configuring a virtual RHCS cluster and would like to use the fence_rhevm
> agent for stonith.
> The VMs composing the 4-node cluster run CentOS 7.4
> with fence-agents-rhevm-4.0.11-66.el7_4.4.x86_64; I see that in 7.5 the
> agent is fence-agents-rhevm-4.0.11-86.el7_5.2.x86_64, but with no
> modification to the /usr/sbin/fence_rhevm Python script apart from the
> BUILD_DATE line.
>
> Some questions:
> - it seems it still uses API v3 even though it is deprecated; is there any
> particular reason? Is it possible to update to v4? Can I create an RFE in
> Bugzilla?
>

​We already have a bug for that:

https://bugzilla.redhat.com/show_bug.cgi?id=1402862​

Let's hope it will be delivered in CentOS 7.6


> This is in fact what I get in the engine events when registering the
> connection:
>
> Client from address "10.4.4.68" is using version 3 of the API,
> which has been deprecated since version 4.0 of the engine, and will no
> longer be supported starting with version 4.3. Make sure to update that
> client to use a supported version of the API and the SDKs, before
> upgrading to version 4.3 of the engine.
> 7/7/18 4:56:11 PM
>

We need to get this message fixed; we already know that APIv3 will not be
removed in oVirt 4.3. I created https://bugzilla.redhat.com/1599054 to track
that.

>
> User fenceuser@internal-authz connecting from '10.4.4.68' using session '
> OVgdzMofRFDS4ZKSdL83mRyGUFEdc+++onJHzGiAfpYuS07xa/
> EbBqFEPtztpwEeRzCn9mBOTGXE69rBbHlhXQ==' logged in.
> 7/7/18 4:56:11 PM
>
> - UserRole ok
> Back in April 2017 for version 4.1.1 I had problems and it seems I had to
> set super user privileges for the "fencing user"
> See thread here
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/
> FS5YFU5ZXIYDC5SWQY4MZS65UDKSX7JS/
>
> Now instead I only set UserRole for the defined "fenceuser" on the virtual
> cluster VMS and it works ok
>
> - for a VM with permissions already setup:
> [root@cl1 ~]# fence_rhevm -a 10.4.192.49 -l "fenceuser@internal" -S
> /usr/local/bin/pwd_ovmgr01.sh -z  --ssl-insecure -o status
> --shell-timeout=20 --power-wait=10 -n cl1
> Status: ON
>
> - for a VM still without permissions
> [root@cl1 ~]# fence_rhevm -a 10.4.192.49 -l "fenceuser@internal" -S
> /usr/local/bin/pwd_ovmgr01.sh -z  --ssl-insecure -o status
> --shell-timeout=20 --power-wait=10 -n cl2
> Failed: Unable to obtain correct plug status or plug is not available
>
> - for a VM with permissions already setup:
> I'm able to make power off / power on of the VM
>

Eli, please take a look at the above, that might be the issue you saw with
fence_rhevm.


> Thanks,
> Gianluca
>
>
>


-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.


[ovirt-users] Re: Ovirt cluster unstable; gluster to blame (again)

2018-07-08 Thread Yaniv Kaul
On Sat, Jul 7, 2018 at 8:45 AM, Jim Kusznir  wrote:

> So, I'm still at a loss... It sounds like it's either insufficient RAM/swap
> or insufficient network. It seems to be neither now. At this point, it
> appears that gluster is just "broken" and killing my systems for no
> discernible reason. Here are the details, all from the same system (currently
> running 3 VMs):
>
> [root@ovirt3 ~]# w
>  22:26:53 up 36 days,  4:34,  1 user,  load average: 42.78, 55.98, 53.31
> USER TTY  FROM LOGIN@   IDLE   JCPU   PCPU WHAT
> root pts/0192.168.8.90 22:262.00s  0.12s  0.11s w
>
> bwm-ng reports the highest data usage was about 6MB/s during this test
> (and that was combined; I have two different gig networks.  One gluster
> network (primary VM storage) runs on one, the other network handles
> everything else).
>
> [root@ovirt3 ~]# free -m
>   totalusedfree  shared  buff/cache
>  available
> Mem:  31996   13236 232  18   18526
>  18195
> Swap: 163831475   14908
>
> top - 22:32:56 up 36 days,  4:41,  1 user,  load average: 17.99, 39.69,
> 47.66
>

That is indeed a high load average. How many CPUs do you have, btw?


> Tasks: 407 total,   1 running, 405 sleeping,   1 stopped,   0 zombie
> %Cpu(s):  8.6 us,  2.1 sy,  0.0 ni, 87.6 id,  1.6 wa,  0.0 hi,  0.1 si,
> 0.0 st
> KiB Mem : 32764284 total,   228296 free, 13541952 used, 18994036 buff/cache
> KiB Swap: 16777212 total, 15246200 free,  1531012 used. 18643960 avail Mem
>

Can you check what's swapping here? (a tweak to top output will show that)
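
For example, pressing 'f' inside top and enabling the SWAP field shows
per-process swap usage, or from a shell something like:

  grep VmSwap /proc/[0-9]*/status | sort -t: -k3 -rn | head

should list the processes holding the most swapped-out memory (the exact
sorting may need tweaking depending on the procps version).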


>
>   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+
> COMMAND
>
> 30036 qemu  20   0 6872324   5.2g  13532 S 144.6 16.5 216:14.55
> /usr/libexec/qemu-kvm -name guest=BillingWin,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/v+
> 28501 qemu  20   0 5034968   3.6g  12880 S  16.2 11.7  73:44.99
> /usr/libexec/qemu-kvm -name guest=FusionPBX,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/va+
>  2694 root  20   0 2169224  12164   3108 S   5.0  0.0   3290:42
> /usr/sbin/glusterfsd -s ovirt3.nwfiber.com --volfile-id
> data.ovirt3.nwfiber.com.gluster-brick2-data -p /var/run/+
>

This one's certainly taking quite a bit of your CPU usage overall.


> 14293 root  15  -5  944700  13356   4436 S   4.0  0.0  16:32.15
> /usr/sbin/glusterfs --volfile-server=192.168.8.11
> --volfile-server=192.168.8.12 --volfile-server=192.168.8.13 --+
>

I'm not sure what the sorting order is, but it doesn't look like Gluster is
taking a lot of memory?


> 25100 vdsm   0 -20 6747440 107868  12836 S   2.3  0.3  21:35.20
> /usr/bin/python2 /usr/share/vdsm/vdsmd
>
> 28971 qemu  20   0 2842592   1.5g  13548 S   1.7  4.7 241:46.49
> /usr/libexec/qemu-kvm -name guest=unifi.palousetech.com,debug-threads=on
> -S -object secret,id=masterKey0,format=+
> 12095 root  20   0  162276   2836   1868 R   1.3  0.0   0:00.25 top
>
>
>  2708 root  20   0 1906040  12404   3080 S   1.0  0.0   1083:33
> /usr/sbin/glusterfsd -s ovirt3.nwfiber.com --volfile-id
> engine.ovirt3.nwfiber.com.gluster-brick1-engine -p /var/+
> 28623 qemu  20   0 4749536   1.7g  12896 S   0.7  5.5   4:30.64
> /usr/libexec/qemu-kvm -name guest=billing.nwfiber.com,debug-threads=on -S
> -object secret,id=masterKey0,format=ra+
>

The VMs I see here and above together account for most? (5.2+3.6+1.5+1.7 =
12GB) - still plenty of memory left.


>10 root  20   0   0  0  0 S   0.3  0.0 215:54.72
> [rcu_sched]
>
>  1030 sanlock   rt   0  773804  27908   2744 S   0.3  0.1  35:55.61
> /usr/sbin/sanlock daemon
>
>  1890 zabbix20   0   83904   1696   1612 S   0.3  0.0  24:30.63
> /usr/sbin/zabbix_agentd: collector [idle 1 sec]
>
>  2722 root  20   0 1298004   6148   2580 S   0.3  0.0  38:10.82
> /usr/sbin/glusterfsd -s ovirt3.nwfiber.com --volfile-id
> iso.ovirt3.nwfiber.com.gluster-brick4-iso -p /var/run/gl+
>  6340 root  20   0   0  0  0 S   0.3  0.0   0:04.30
> [kworker/7:0]
>
> 10652 root  20   0   0  0  0 S   0.3  0.0   0:00.23
> [kworker/u64:2]
>
> 14724 root  20   0 1076344  17400   3200 S   0.3  0.1  10:04.13
> /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
> /var/run/gluster/glustershd/glustershd.pid -+
> 22011 root  20   0   0  0  0 S   0.3  0.0   0:05.04
> [kworker/10:1]
>
>
> Not sure why the system load dropped other than I was trying to take a
> picture of it :)
>
> In any case, it appears that at this time I have plenty of swap, RAM, and
> network capacity, and yet things are still running very sluggishly; I'm still
> getting e-mails from servers complaining about loss of communication with
> something or other; I still get e-mails from the engine about bad engine
> status, then recovery, etc.
>

1G isn't good enough for Gluster. It doesn't help that you have SSDs,
because the network is certainly your bottleneck.

[ovirt-users] Re: Device Mapper Timeout when using Gluster Snapshots

2018-07-08 Thread Hesham Ahmed
Apparently this is known default behavior of LVM snapshots, and in that
case maybe Cockpit in oVirt Node should create mountpoints using the
/dev/mapper path instead of the UUID by default. The timeout issue
persists even after switching to /dev/mapper device paths in fstab.
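
For reference, the switch I describe looks roughly like this (using the LV
and mount point from my example below; mount options trimmed for brevity):

  # old /etc/fstab entry, mounted by filesystem UUID (shared with the snapshots):
  #UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01 xfs inode64,noatime,nodiratime 0 0
  # new entry, mounted by the LV's own device-mapper path:
  /dev/mapper/gluster00-lv01_data01 /gluster_bricks/gv01_data01 xfs inode64,noatime,nodiratime 0 0

  # then pick up the change
  systemctl daemon-reload
  umount /gluster_bricks/gv01_data01 && mount /gluster_bricks/gv01_data01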
On Sun, Jul 8, 2018 at 12:59 PM Hesham Ahmed  wrote:
>
> I also noticed that Gluster Snapshots have the SAME UUID as the main
> LV and if using UUID in fstab, the snapshot device is sometimes
> mounted instead of the primary LV
>
> For instance:
> /etc/fstab contains the following line:
>
> UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
> auto 
> inode64,noatime,nodiratime,x-parent=dMeNGb-34lY-wFVL-WF42-hlpE-TteI-lMhvvt
> 0 0
>
> # lvdisplay gluster00/lv01_data01
>   --- Logical volume ---
>   LV Path/dev/gluster00/lv01_data01
>   LV Namelv01_data01
>   VG Namegluster00
>
> # mount
> /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0 on
> /gluster_bricks/gv01_data01 type xfs
> (rw,noatime,nodiratime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota)
>
> Notice above the device mounted at the brick mountpoint is not
> /dev/gluster00/lv01_data01 and instead is one of the snapshot devices
> of that LV
>
> # blkid
> /dev/mapper/gluster00-lv01_shaker_com_sa:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-4ca8eef409ec4932828279efb91339de_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-59992b6c14644f13b5531a054d2aa75c_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-362b50c994b04284b1664b2e2eb49d09_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-0b3cc414f4cb4cddb6e81f162cdb7efe_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-da98ce5efda549039cf45a18e4eacbaf_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-4ea5cce4be704dd7b29986ae6698a666_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>
> Notice the UUID of LV and its snapshots is the same causing systemd to
> mount one of the snapshot devices instead of LV which results in the
> following gluster error:
>
> gluster> volume start gv01_data01 force
> volume start: gv01_data01: failed: Volume id mismatch for brick
> vhost03:/gluster_bricks/gv01_data01/gv. Expected volume id
> be6bc69b-c6ed-4329-b300-3b9044f375e1, volume id
> 55e97e74-12bf-48db-99bb-389bb708edb8 found
>
> On Sun, Jul 8, 2018 at 12:32 PM  wrote:
> >
> > I am facing this trouble since version 4.1 up to the latest 4.2.4, once
> > we enable gluster snapshots and accumulate some snapshots (as few as 15
> > snapshots per server) we start having trouble booting the server. The
> > server enters emergency shell upon boot after timing out waiting for
> > snapshot devices. Waiting a few minutes and pressing Control-D then boots 
> > the server normally. In case of very large number of snapshots (600+) it 
> > can take days before the sever will boot. Attaching journal
> > log, let me know if you need any other logs.
> >
> > Details of the setup:
> >
> > 3 node hyperconverged oVirt setup (64GB RAM, 8-Core E5 Xeon)
> > oVirt 4.2.4
> > oVirt Node 4.2.4
> > 10Gb Interface
> >
> > Thanks,
> >
> > Hesham S. Ahmed


[ovirt-users] Re: Device Mapper Timeout when using Gluster Snapshots

2018-07-08 Thread Hesham Ahmed
I also noticed that the Gluster snapshots have the SAME UUID as the main
LV, and if the UUID is used in fstab, the snapshot device is sometimes
mounted instead of the primary LV.

For instance:
/etc/fstab contains the following line:

UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
auto inode64,noatime,nodiratime,x-parent=dMeNGb-34lY-wFVL-WF42-hlpE-TteI-lMhvvt
0 0

# lvdisplay gluster00/lv01_data01
  --- Logical volume ---
  LV Path/dev/gluster00/lv01_data01
  LV Namelv01_data01
  VG Namegluster00

# mount
/dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0 on
/gluster_bricks/gv01_data01 type xfs
(rw,noatime,nodiratime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota)

Notice above that the device mounted at the brick mountpoint is not
/dev/gluster00/lv01_data01 but one of the snapshot devices
of that LV.

# blkid
/dev/mapper/gluster00-lv01_shaker_com_sa:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-4ca8eef409ec4932828279efb91339de_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-59992b6c14644f13b5531a054d2aa75c_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-362b50c994b04284b1664b2e2eb49d09_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-0b3cc414f4cb4cddb6e81f162cdb7efe_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-da98ce5efda549039cf45a18e4eacbaf_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-4ea5cce4be704dd7b29986ae6698a666_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"

Notice that the UUID of the LV and its snapshots is the same, causing systemd
to mount one of the snapshot devices instead of the LV, which results in the
following Gluster error:

gluster> volume start gv01_data01 force
volume start: gv01_data01: failed: Volume id mismatch for brick
vhost03:/gluster_bricks/gv01_data01/gv. Expected volume id
be6bc69b-c6ed-4329-b300-3b9044f375e1, volume id
55e97e74-12bf-48db-99bb-389bb708edb8 found
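
A quick way to confirm this on an affected host (paths as above) is to check
which device actually got mounted on the brick and compare the brick's volume
id with the one Gluster expects:

  findmnt -n -o SOURCE /gluster_bricks/gv01_data01
  getfattr -n trusted.glusterfs.volume-id -e hex /gluster_bricks/gv01_data01/gv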

On Sun, Jul 8, 2018 at 12:32 PM  wrote:
>
> I am facing this trouble since version 4.1 up to the latest 4.2.4, once
> we enable gluster snapshots and accumulate some snapshots (as few as 15
> snapshots per server) we start having trouble booting the server. The
> server enters emergency shell upon boot after timing out waiting for
> snapshot devices. Waiting a few minutes and pressing Control-D then boots the 
> server normally. In case of very large number of snapshots (600+) it can take 
> days before the sever will boot. Attaching journal
> log, let me know if you need any other logs.
>
> Details of the setup:
>
> 3 node hyperconverged oVirt setup (64GB RAM, 8-Core E5 Xeon)
> oVirt 4.2.4
> oVirt Node 4.2.4
> 10Gb Interface
>
> Thanks,
>
> Hesham S. Ahmed


[ovirt-users] Device Mapper Timeout when using Gluster Snapshots

2018-07-08 Thread hsahmed
I have been facing this trouble since version 4.1 up to the latest 4.2.4: once
we enable gluster snapshots and accumulate some snapshots (as few as 15
snapshots per server), we start having trouble booting the server. The
server enters the emergency shell upon boot after timing out waiting for the
snapshot devices. Waiting a few minutes and pressing Control-D then boots the
server normally. In the case of a very large number of snapshots (600+) it can
take days before the server will boot. Attaching the journal
log; let me know if you need any other logs.

Details of the setup:

3 node hyperconverged oVirt setup (64GB RAM, 8-Core E5 Xeon)
oVirt 4.2.4
oVirt Node 4.2.4
10Gb Interface

Thanks,

Hesham S. Ahmed

Jul 08 11:57:27 vhost03 systemd[1]: Starting Activation of LVM2 logical volumes...
Jul 08 11:57:27 vhost03 systemd[1]: Started Device-mapper event daemon.
Jul 08 11:57:27 vhost03 systemd[1]: Starting Device-mapper event daemon...
Jul 08 11:57:27 vhost03 dmeventd[1079]: dmeventd ready for processing.
Jul 08 11:57:27 vhost03 lvm[1079]: Monitoring thin pool onn-pool00-tpool.
Jul 08 11:57:27 vhost03 kernel: device-mapper: thin: Data device (dm-1) discard unsupported: Disabling discard passdown.
Jul 08 11:57:27 vhost03 systemd[1]: Found device /dev/mapper/onn-home.
Jul 08 11:57:27 vhost03 systemd[1]: Found device /dev/mapper/onn-tmp.
Jul 08 11:57:28 vhost03 systemd[1]: Found device /dev/mapper/onn-var_log.
Jul 08 11:57:28 vhost03 systemd[1]: Found device /dev/mapper/onn-var.
Jul 08 11:57:28 vhost03 systemd[1]: Found device /dev/mapper/onn-var_log_audit.
Jul 08 11:57:28 vhost03 lvm[1078]: 11 logical volume(s) in volume group "onn" now active
Jul 08 11:57:28 vhost03 systemd[1]: Found device /dev/mapper/onn-var_crash.
Jul 08 11:58:46 vhost03 systemd[1]: Received SIGRTMIN+20 from PID 503 (plymouthd).
Jul 08 11:58:47 vhost03 systemd[1]: Received SIGRTMIN+20 from PID 503 (plymouthd).
Jul 08 11:58:53 vhost03 systemd[1]: Received SIGRTMIN+20 from PID 503 (plymouthd).
Jul 08 11:58:54 vhost03 systemd[1]: Job dev-disk-by\x2duuid-858d885f\x2d4975\x2d4f17\x2da66b\x2d6ce462d2526a.device/start timed out.
Jul 08 11:58:54 vhost03 systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-858d885f\x2d4975\x2d4f17\x2da66b\x2d6ce462d2526a.device.
Jul 08 11:58:54 vhost03 systemd[1]: Dependency failed for /gluster_bricks/gv01_somedomain_com.
Jul 08 11:58:54 vhost03 systemd[1]: Dependency failed for Local File Systems.
Jul 08 11:58:54 vhost03 systemd[1]: Dependency failed for Migrate local SELinux policy changes from the old store structure to the new structure.
Jul 08 11:58:54 vhost03 systemd[1]: Job selinux-policy-migrate-local-changes@targeted.service/start failed with result 'dependency'.
Jul 08 11:58:54 vhost03 systemd[1]: Dependency failed for Relabel all filesystems, if necessary.
Jul 08 11:58:54 vhost03 systemd[1]: Job rhel-autorelabel.service/start failed with result 'dependency'.
Jul 08 11:58:54 vhost03 systemd[1]: Job local-fs.target/start failed with result 'dependency'.
Jul 08 11:58:54 vhost03 systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
Jul 08 11:58:54 vhost03 systemd[1]: Job gluster_bricks-gv01_somedomain_com.mount/start failed with result 'dependency'.
Jul 08 11:58:54 vhost03 systemd[1]: Job dev-disk-by\x2duuid-858d885f\x2d4975\x2d4f17\x2da66b\x2d6ce462d2526a.device/start failed with result 'timeout'.
Jul 08 11:58:54 vhost03 systemd[1]: Job dev-disk-by\x2duuid-8ed1d2df\x2d034b\x2d4c80\x2d9179\x2da44417de6c71.device/start timed out.
Jul 08 11:58:54 vhost03 systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-8ed1d2df\x2d034b\x2d4c80\x2d9179\x2da44417de6c71.device.
Jul 08 11:58:54 vhost03 systemd[1]: Dependency failed for /gluster_bricks/gv01_anotherdomain_com.
Jul 08 11:58:54 vhost03 systemd[1]: Job gluster_bricks-gv01_anotherdomain_com.mount/start failed with result 'dependency'.
Jul 08 11:58:54 vhost03 systemd[1]: Job dev-disk-by\x2duuid-8ed1d2df\x2d034b\x2d4c80\x2d9179\x2da44417de6c71.device/start failed with result 'timeout'.
Jul 08 11:58:54 vhost03 systemd[1]: Job dev-disk-by\x2duuid-cb13dff9\x2deead\x2d4747\x2da3a2\x2d97257882510b.device/start timed out.
Jul 08 11:58:54 vhost03 systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-cb13dff9\x2deead\x2d4747\x2da3a2\x2d97257882510b.device.
Jul 08 11:58:54 vhost03 systemd[1]: Dependency failed for /gluster_bricks/export2.
Jul 08 11:58:54 vhost03 systemd[1]: Job gluster_bricks-export2.mount/start failed with result 'dependency'.
Jul 08 11:58:54 vhost03 systemd[1]: Job dev-disk-by\x2duuid-cb13dff9\x2deead\x2d4747\x2da3a2\x2d97257882510b.device/start failed with result 'timeout'.
Jul 08 11:58:54 vhost03 systemd[1]: Job dev-disk-by\x2duuid-3f46c0f2\x2df64f\x2d4bec\x2d8af3\x2da146f8a503cd.device/start timed out.
Jul 08 11:58:54 vhost03 systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-3f46c0f2\x2df64f\x2d4bec\x2d8af3\x2da146f8a503cd.device.
Jul 08 11:58:54 vhost03 systemd[1]: Dependency failed for /gluster_bricks/gv01_newdomain_com.
Jul 08 11:58

[ovirt-users] Re: Cannot import a qcow2 image

2018-07-08 Thread Yedidyah Bar David
On Fri, Jul 6, 2018 at 9:35 AM, 
wrote:

> From a user point of view ...
>
> Letsencrypt or another certificate authority ... it should not matter...
>
> Just having one set of files ( cer/key/ca-chain) with a clear name
> referenced from "all config files" would be the easiest...
>

Please realize that the engine CA is _mainly_ used to sign hosts' keys.
We do not want to let the user do this with a 3rd-party CA (well, until we
fix bz 1134219, see my other reply). Signing all the other keys is only
done "because we can" :-), to simplify things by default.


>
> Once you get the certs from your provider, you just overwrite the files
> with your own, restart the services, and "that's it" ;-)
>

That's the one-line summary of:

https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/


or at least that's the intention.
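
Very roughly, and from memory rather than from that appendix (so treat the
exact paths below as assumptions and double-check them there first), the
web/API part of the procedure on the engine host looks like:

  # after backing up the originals, overwrite them with the 3rd-party files
  cp third-party-cert.cer /etc/pki/ovirt-engine/certs/apache.cer
  cp third-party-key.key  /etc/pki/ovirt-engine/keys/apache.key.nopass
  cp ca-chain.pem         /etc/pki/ovirt-engine/apache-ca.pem
  systemctl restart httpd ovirt-engine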


>
> Letsencrypt renewal does not have to be handled on the oVirt host (on a
> bastion host where LE is configured, a simple script can be run to update
> the certs and restart the services...)
>

Indeed.


>
> My 0.02€
> Etienne



-- 
Didi


[ovirt-users] Re: Cannot import a qcow2 image

2018-07-08 Thread Yedidyah Bar David
On Thu, Jul 5, 2018 at 5:20 PM, Nir Soffer  wrote:

> On Thu, Jul 5, 2018 at 4:55 PM 
> wrote:
>
>> Thanks a lot for your support!
>>
>> I reinstalled a fresh ovirt-engine and managed to import the certificate.
>>
>> I managed to upload an image even with the self-signed certificates
>> configured.
>>
>> I think a "simple" way to allow Letsencrypt certificates to be used for
>> "external access" (web UI, API...) could be useful
>>
>
> I agree.
>
> Didi, can we integrate with letsencrypt to have engine/imageio certificates
> respected by browsers without additional configuration?
>

I never looked specifically at this. We do have these open bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1336873
https://bugzilla.redhat.com/show_bug.cgi?id=1134219

If we want to specifically handle LE, please open a bug. Not sure we should.


> The need to import the CA into your browser in order to upload images is a
> big user experience issue. We see users failing to do it again and again.
>

I guess we have here two different issues:

1. By default, we generate a different key/cert pair for imageio, rather
than use the one for httpd. So a user accepting the cert for httpd still
fails to use the cert for imageio, until it's accepted as well (a quick way
to see the two pairs is sketched below). Perhaps we should use the same pair
by default? No idea why we decided to use a separate pair.
Please open an RFE to use the same pair as httpd.

2. The procedure to use a 3rd-party CA does not mention imageio. That's
already discussed earlier in this thread.
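
Regarding point 1, a quick way to see the two pairs from a client machine is
something like the following (54323 as the imageio proxy port is an
assumption about a default setup):

  openssl s_client -connect ENGINE_FQDN:443 < /dev/null 2>/dev/null \
    | openssl x509 -noout -subject -fingerprint
  openssl s_client -connect ENGINE_FQDN:54323 < /dev/null 2>/dev/null \
    | openssl x509 -noout -subject -fingerprint

If the fingerprints differ, the browser has to accept both certificates.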

Best regards,
-- 
Didi