On Sat, 7 Jul 2018 16:28:49 +0200
Gianluca Cecchi wrote:
> Hello,
> I'm testing a virtual rhcs cluster based on 4 nodes that are CentOS
> 7.4 VMs, so the stack is based on Corosync/Pacemaker.
> I have two oVirt hosts, so my plan is to put two VMs on the first host
> and two VMs on the second host,
Thanks for sharing!
On Sun, Jul 8, 2018 at 1:42 PM, Hesham Ahmed wrote:
> Apparently this is known default behavior of LVM snapshots and in that
> case maybe Cockpit in oVirt node should create mountpoints using
> /dev/mapper path instead of UUID by default. The timeout issue
> persists even after switching to /dev/mapper/devices in fstab
On Sun, Jul 8, 2018 at 1:16 PM, Martin Perina wrote:
Hi, thanks for your previous answers
- UserRole ok
>> Back in April 2017 for version 4.1.1 I had problems and it seems I had to
>> set super user privileges for the "fencing user"
>> See thread here
>> https://lists.ovirt.org/archives/list/use
Thanks a lot.
On Sun, Jul 8, 2018 at 9:14 AM, Maor Lipchuk wrote:
> cc'ing Denis and Sahina,
> Perhaps they can share their experience and insights with the hyperconverged
> environment.
>
> Regards,
> Maor
>
> On Fri, Jul 6, 2018 at 9:47 AM, Tal Bar-Or wrote:
>
>> Hello All,
>> I am about deplo
Hi,
Red Hat has an upstream-first policy, meaning that in terms of code there is
no difference.
The main differences of a RHV subscription are:
1. Longer support life cycle and ELS/EUS options with security-first
priorities.
2. Access to support with different levels of SLA options.
3
On Thu, Jul 5, 2018 at 12:36 PM, wrote:
> Hello,
> as part of our policy I have to change from LDAP to Active
> Directory for authentication in our oVirt system.
Hmm, do I understand that correctly that you were moving oVirt users from
some other LDAP server to AD? Any reason other th
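For context, oVirt's Active Directory integration goes through the ovirt-engine-extension-aaa-ldap package, whose interactive setup tool generates the authn/authz profile. A minimal sketch of the usual steps (the "Active Directory" option is one of the profiles the tool offers; run it on the engine host):

```
# Install the LDAP AAA extension and its interactive setup tool.
yum install -y ovirt-engine-extension-aaa-ldap-setup

# Walk through the wizard and pick the "Active Directory" profile
# when prompted; it writes the extension config under
# /etc/ovirt-engine/extensions.d/ and /etc/ovirt-engine/aaa/.
ovirt-engine-extension-aaa-ldap-setup

# Restart the engine so the new authentication profile is loaded.
systemctl restart ovirt-engine
```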
On Sat, Jul 7, 2018 at 5:14 PM, Gianluca Cecchi wrote:
> Hello,
> I'm configuring a virtual rhcs cluster and would like to use fence_rhevm
> agent for stonith.
> As VMs composing the 4-nodes cluster I'm using CentOS 7.4 OS
> with fence-agents-rhevm-4.0.11-66.el7_4.4.x86_64; I see that in 7.5 the
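Before wiring fence_rhevm into stonith, a manual status check confirms the agent can reach the engine API. A sketch under assumed names (hypothetical engine FQDN, credentials, and VM names; attribute names may differ between fence-agents versions, the legacy ones shown match the el7 package mentioned above):

```
# Ask the engine for the power status of the VM acting as a cluster node.
# --ssl talks HTTPS to the engine API; --plug is the VM name in oVirt.
fence_rhevm --ip=engine.example.com --username=admin@internal \
    --password=secret --ssl --plug=cl-node1 --action=status

# Once the manual check works, the same parameters go into the stonith
# resource (pcs syntax, legacy attribute names):
pcs stonith create vmfence fence_rhevm \
    ipaddr=engine.example.com login=admin@internal passwd=secret ssl=1 \
    pcmk_host_map="cl-node1:cl-node1;cl-node2:cl-node2"
```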
On Sat, Jul 7, 2018 at 8:45 AM, Jim Kusznir wrote:
> So, I'm still at a loss... It sounds like it's either insufficient RAM/swap,
> or insufficient network. It seems to be neither now. At this point, it
> appears that gluster is just "broke" and killing my systems for no
> discernible reason. He
Apparently this is known default behavior of LVM snapshots and in that
case maybe Cockpit in oVirt node should create mountpoints using
/dev/mapper path instead of UUID by default. The timeout issue
persists even after switching to /dev/mapper/devices in fstab
On Sun, Jul 8, 2018 at 12:59 PM, Hesham Ahmed wrote:
I also noticed that Gluster Snapshots have the SAME UUID as the main
LV and if using UUID in fstab, the snapshot device is sometimes
mounted instead of the primary LV
For instance:
/etc/fstab contains the following line:
UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
auto i
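Since a gluster snapshot LV carries the same filesystem UUID as its origin LV, referencing the brick by its device-mapper path removes the ambiguity. A minimal sketch of such an fstab entry, with hypothetical VG/LV names (adjust to your layout; `x-systemd.device-timeout` bounds the boot-time wait and `nofail` keeps a missing brick from dropping the host into emergency mode):

```
# /etc/fstab: mount the brick by device-mapper path, not by UUID,
# which a snapshot LV shares with its origin LV.
/dev/mapper/gluster_vg-gv01_data01 /gluster_bricks/gv01_data01 xfs defaults,nofail,x-systemd.device-timeout=30 0 0
```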
I have been facing this trouble since version 4.1 up to the latest 4.2.4: once
we enable gluster snapshots and accumulate some snapshots (as few as 15
snapshots per server) we start having trouble booting the server. The
server enters emergency shell upon boot after timing out waiting for
snapshot devices
On Fri, Jul 6, 2018 at 9:35 AM, wrote:
> From a user point of view ...
>
> Letsencrypt or another certificate authority ... it should not matter...
>
> Just having one set of files ( cer/key/ca-chain) with a clear name
> referenced from "all config files" would be the easiest...
>
Please realize
On Thu, Jul 5, 2018 at 5:20 PM, Nir Soffer wrote:
> On Thu, Jul 5, 2018 at 4:55 PM, wrote:
>
>> Thanks a lot for your support!
>>
>> I reinstalled a fresh ovirt-engine and managed to import the certificate.
>>
>> I managed to upload an image even with the self-signed certificates
>> configure