Re: [ovirt-users] Hardware critique

2018-04-05 Thread Vincent Royer
Well good, we can at least bounce ideas off each other, and I'm sure we'll
get some good advice sooner or later! Best way to get good ideas on the
internet is to post bad ones and wait ;)

In the performance and sizing guide PDF, they make this statement:

*Standard servers with 4:2 erasure coding and JBOD bricks provide superior
value in terms of throughput per server cost.*

I don't know enough about erasure coding to know if that applies, but all
the graphs that involve JBOD in that document look pretty good to me.
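Not an authoritative benchmark, but the capacity math behind that statement is easy to sketch. The numbers below are hypothetical, and note that a 4:2 dispersed (erasure-coded) volume needs at least six bricks, so it doesn't map directly onto a 3-node replica layout:

```python
def usable_fraction_replica(copies):
    # Plain replication keeps `copies` full copies of every byte.
    return 1.0 / copies

def usable_fraction_ec(data, parity):
    # A data:parity erasure code (e.g. 4:2) stores data + parity fragments,
    # surviving up to `parity` brick failures.
    return data / (data + parity)

raw_tb = 24  # hypothetical: 3 servers x 4 x 2TB JBOD bricks
print(f"replica 3: {raw_tb * usable_fraction_replica(3):.1f} TB usable")  # 8.0
print(f"EC 4:2:    {raw_tb * usable_fraction_ec(4, 2):.1f} TB usable")    # 16.0
```

Same raw disks, twice the usable space, which is roughly where the "superior value per server cost" claim comes from.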

What do you know about ZFS?  It seems to be popular in combination with
Gluster.


On Thu, Apr 5, 2018, 2:38 PM Jayme,  wrote:

> Vincent,
>
> I've been back and forth on SSDs vs HDDs and can't really get a clear
> answer.  You are correct though, it would only equal 4TB usable in the end
> which is pretty crazy but that amount of 7200 RPM HDDs equals about the
> same cost as 3 2TB ssds would.  I actually posted a question to this list
> not long ago asking how GlusterFS might perform with a small amount of
> disks such as one 2TB SSD per host and some glusterFS users commented
> stating that network would be the bottleneck long before the disks and a
> small amount of SSDs could bottleneck at RPC layer.   Also, I believe at
> this time GlusterFS is not exactly developed to take full advantage of SSDs
> (but I believe there has been strides being made in that regard, I could be
> wrong here).
>
> As for replica 3 being overkill that may be true as well but from what
> I've read on Ovirt and GlusterFS list archives people typically feel safer
> with replica 3 and run in to less disaster scenarios and can provide easier
> recovery.  I'm not sold on Replica 3 either, Rep 3 Arbiter 1 may be more
> than fine but I wanted to err on the side of caution as this setup may host
> production servers sometime in the future.
>
> I really wish I could get some straight answers on best configuration for
> Ovirt + GlusterFS but thus far it has been a big question mark.  I don't
> know if Raid is better than JBOD and I don't know if a smaller number of
> SSDs would perform any better/worse than larger number of spinning disks in
> raid 10.
>

[ovirt-users] Greetings oVirt Users

2018-04-05 Thread Clint Boggio
Environment Rundown:

OVirt 4.2
6 CentOS 7.4 Compute Nodes Intel Xeon
1 CentOS 7.4 Dedicated Engine Node Intel Xeon 
1 Datacenter 
1 Storage Domain
1 Cluster
10Gig-E iSCSI Storage 
10Gig-E NFS Export Domain
20 VMs of various OSes and uses

The current cluster is using the Nehalem architecture.

I’ve got to deploy two new VMs that the current system will not allow me to 
configure with the Nehalem-based cluster, so I’ve got to bump up the 
architecture of the cluster to accommodate them.

Before i shut down all the current VMs to upgrade the cluster, I have some 
questions about the effect this is going to have on the environment.

1. Will all of the current VMs use the legacy processor architecture, or will I 
have to change them?

2. Can I elevate the cluster processor functionality higher than the underlying 
hardware architecture?

3. Regarding the new cluster processor: will all of the processor architectures 
below the one I choose be an option for existing and future VMs?

I apologize for the long post and I hope that I haven’t left out any vital 
information.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware critique

2018-04-05 Thread Jayme
Vincent,

I've been back and forth on SSDs vs HDDs and can't really get a clear
answer.  You are correct, though: it would only equal 4TB usable in the
end, which is pretty crazy, but that many 7200 RPM HDDs cost about the
same as three 2TB SSDs would.  I actually posted a question to this list
not long ago asking how GlusterFS might perform with a small number of
disks, such as one 2TB SSD per host, and some GlusterFS users commented
that the network would be the bottleneck long before the disks, and that a
small number of SSDs could bottleneck at the RPC layer.  Also, I believe
at this time GlusterFS is not exactly developed to take full advantage of
SSDs (though I believe strides have been made in that regard; I could be
wrong here).
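A back-of-envelope sketch of that bottleneck claim (my own numbers, assuming Gluster's client-side replication and ignoring protocol overhead):

```python
LINK_GBPS = 10.0                   # one 10GbE storage NIC per host
link_bytes = LINK_GBPS * 1e9 / 8   # ~1.25 GB/s raw line rate

def max_client_write(replica):
    # With client-side replication (AFR), the client sends each write to
    # every replica, so one NIC's bandwidth is divided `replica` ways.
    return link_bytes / replica

print(f"replica 2: ~{max_client_write(2) / 1e6:.0f} MB/s")  # ~625 MB/s
print(f"replica 3: ~{max_client_write(3) / 1e6:.0f} MB/s")  # ~417 MB/s
```

Even a single modern SATA SSD can sustain on the order of 500 MB/s sequentially, so with replica 3 on 10GbE the wire saturates before a handful of SSDs would.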

As for replica 3 being overkill, that may be true as well, but from what
I've read in the oVirt and GlusterFS list archives, people typically feel
safer with replica 3, run into fewer disaster scenarios, and find recovery
easier.  I'm not sold on replica 3 either; replica 3 arbiter 1 may be more
than fine, but I wanted to err on the side of caution, as this setup may
host production servers sometime in the future.

I really wish I could get some straight answers on the best configuration
for oVirt + GlusterFS, but thus far it has been a big question mark.  I
don't know if RAID is better than JBOD, and I don't know if a smaller
number of SSDs would perform any better or worse than a larger number of
spinning disks in RAID 10.
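On the earlier "two host failures or just one?" question, the majority-quorum arithmetic (assuming Gluster's default quorum settings; an arbiter brick counts toward the majority, which is why replica 3 arbiter 1 tolerates the same single failure at a fraction of the disk cost) works out like this:

```python
def tolerated_failures(replica_bricks):
    # With majority ("auto") quorum, writes stay available while more
    # than half the bricks in the replica set are up.
    quorum = replica_bricks // 2 + 1
    return replica_bricks - quorum

print("replica 2 tolerates", tolerated_failures(2), "failure(s)")  # 0
print("replica 3 tolerates", tolerated_failures(3), "failure(s)")  # 1
```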

On Thu, Apr 5, 2018 at 5:38 PM, Vincent Royer  wrote:

> Jayme,
>
> I'm doing a very similar build, the only difference really is I am using
> SSDs instead of HDDs.   I have similar questions as you regarding expected
> performance. Have you considered JBOD + NFS?   Putting a Gluster Replica 3
> on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a
> massive hit.  Am I correct in saying you will only get 4TB total usable
> capacity out of 24TB worth of disks?  The cost per TB in that sort of
> scenario is immense.
>
> My plan is two 2TB SSDs per server in JBOD with a caching raid card, with
> replica 3.  I would end up with the same 4TB total capacity using 12TB of
> SSDs.
>
> I think Replica 3 is safe enough that you could forgo the RAID 10. But I'm
> talking from zero experience...  Would love others to chime in with their
> opinions on both these setups.
>
> *Vincent Royer*
> *778-825-1057*
>
>
> 
> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>
>
>
>

Re: [ovirt-users] Hardware critique

2018-04-05 Thread FERNANDO FREDIANI
I always found replica 3 complete overkill.  I don't know why people
decided it was necessary; it just looks good and costs a lot for little
benefit.

Normally, when using magnetic disks, two copies are fine for most
scenarios.  With SSDs, depending on the disk configuration of each node,
something RAID 5/6-ish is possible.

Fernando

2018-04-05 17:38 GMT-03:00 Vincent Royer :

> Jayme,
>
> I'm doing a very similar build, the only difference really is I am using
> SSDs instead of HDDs.   I have similar questions as you regarding expected
> performance. Have you considered JBOD + NFS?   Putting a Gluster Replica 3
> on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a
> massive hit.  Am I correct in saying you will only get 4TB total usable
> capacity out of 24TB worth of disks?  The cost per TB in that sort of
> scenario is immense.
>
> My plan is two 2TB SSDs per server in JBOD with a caching raid card, with
> replica 3.  I would end up with the same 4TB total capacity using 12TB of
> SSDs.
>
> I think Replica 3 is safe enough that you could forgo the RAID 10. But I'm
> talking from zero experience...  Would love others to chime in with their
> opinions on both these setups.
>
> *Vincent Royer*
> *778-825-1057*
>
>
> 
> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>
>
>
>


Re: [ovirt-users] Hardware critique

2018-04-05 Thread Vincent Royer
Jayme,

I'm doing a very similar build; the only real difference is that I am
using SSDs instead of HDDs.  I have similar questions as you regarding
expected performance.  Have you considered JBOD + NFS?  Putting a Gluster
replica 3 on top of RAID 10 arrays sounds very safe, but my gosh, the
capacity takes a massive hit.  Am I correct in saying you will only get
4TB total usable capacity out of 24TB worth of disks?  The cost per TB in
that sort of scenario is immense.

My plan is two 2TB SSDs per server in JBOD with a caching RAID card, with
replica 3.  I would end up with the same 4TB total capacity using 12TB of
SSDs.
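A quick sanity check of both capacity figures — a sketch under the assumption of one brick per server and a pure replica volume, whose usable size equals a single brick:

```python
def usable_tb(servers, disks_per_server, disk_tb, raid10=True, replica=3):
    # One brick per server; RAID 10 halves the raw disk space.
    brick = disks_per_server * disk_tb / (2 if raid10 else 1)
    # In a pure replica volume each server holds a full copy, so the
    # volume's usable size is one brick, not the sum of all bricks.
    assert servers == replica, "sketch assumes one brick per server"
    return brick

print(usable_tb(3, 4, 2))                # HDD plan: 24 TB raw -> 4.0 TB
print(usable_tb(3, 2, 2, raid10=False))  # SSD plan: 12 TB raw -> 4.0 TB
```

Both plans land on the same 4TB usable, which matches the numbers above.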

I think replica 3 is safe enough that you could forgo the RAID 10, but
I'm talking from zero experience...  I would love others to chime in with
their opinions on both of these setups.

*Vincent Royer*
*778-825-1057*



*SUSTAINABLE MOBILE ENERGY SOLUTIONS*






Re: [ovirt-users] Hardware critique

2018-04-05 Thread Jayme
Thanks for your feedback.  Any other opinions on this proposed setup?
I'm very torn over using GlusterFS and what the expected performance may
be; there seems to be little information out there.  I would love to hear
feedback specifically from oVirt users on hyperconverged configurations.

On Thu, Apr 5, 2018 at 2:56 AM, Alex K  wrote:

> Hi,
>
> You should be ok with the setup.
> I am running around 20 VMs (Linux and Windows, small and medium size) with
> half of your specs. With a 10G network, replica 3 is OK.
>
> Alex
>
> On Wed, Apr 4, 2018, 16:13 Jayme  wrote:
>
>> I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
>> budget).  I plan to run 20-30 Linux VMs, most of them very lightweight,
>> plus a couple of heavier-hitting web and DB servers with frequent rsync
>> backups.  Some have a lot of small files from large GitHub repos, etc.
>>
>> 3X of the following:
>>
>> Dell PowerEdge R720
>> 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
>> 256GB RAM
>> PERC H710
>> 2x 10GbE NIC
>>
>> Boot/OS will likely be two cheaper small SATA SSDs in RAID 1.
>>
>> Gluster bricks comprised of 4x 2TB WD Gold 7200 RPM SATA HDDs in RAID 10
>> per server, using a replica 3 setup (and I'm thinking right now with no
>> arbiter, for extra redundancy, although I'm not sure what the performance
>> hit may be as a result).  Will this allow for two host failures or just one?
>>
>> I've been really struggling with storage choices; it seems very difficult
>> to predict the performance of GlusterFS due to the variance in hardware
>> (everyone is using something different).  I'm not sure if the performance
>> will be adequate for my needs.
>>
>> I will be using an already-existing Netgear XS716T 10GbE switch for the
>> Gluster storage network.
>>
>> In addition, I plan to build another simple GlusterFS storage server that
>> I can use to geo-replicate the Gluster volume to for DR purposes, and use
>> existing hardware to build an independent standby oVirt host that is able
>> to start up a few high-priority VMs from the geo-replicated GlusterFS
>> volume if for some reason the primary oVirt cluster/GlusterFS volume ever failed.
>>
>> I would love to hear any advice or critiques on this plan.
>>
>> Thanks!
>>
>


Re: [ovirt-users] Is compatibility level change required for upgrading from 4.0 to 4.2?

2018-04-05 Thread Yaniv Kaul
On Thu, Apr 5, 2018, 5:31 PM Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> Hello,
>
> we're planning an upgrade of an old 4.0 setup to 4.2, going through 4.1.
>
> What we found out is that when upgrading from major to major, cluster
> and datacenter compatibility upgrade has to be done at the end of the
> upgrade.
> This means that we also require to restart our VMs for adapting the
> compatibility level.
>
> Do we require to upgrade the compatibility level to 4.1 before
> starting the upgrade to 4.2?
>

No.
Y.


> We have more than 400 vms and only few of them can be restarted at our
> convenience.
>
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
>


Re: [ovirt-users] Why RAW images when using GlusterFS?

2018-04-05 Thread Yaniv Kaul
On Thu, Apr 5, 2018, 2:33 PM Nicolas Ecarnot  wrote:

> Hello,
>
> Amongst others, I have one 3.6 DC working very well since years and all
> based on GlusterFS.
> When having a close look (qemu-img info) on the images, I see their
> format is all RAW and not QCOW2.
>

Raw sparse.


> I never noticed or bothered before, but I'm wondering :
> - is it by design?
>

Yes, for performance reasons.

> - is it something we can change (I'd prefer qcow2)?
>

No.

> - are there some limitations?
>

Not that I know of.


> And finally, I have the same questions about NFS storage domains.
>

Same as Gluster, and other file-based storage.
Y.


> Thank you.
>
> --
> Nicolas ECARNOT
>
>
>


Re: [ovirt-users] Engine reports

2018-04-05 Thread Peter Hudec
Since it's still not installed, yes ;)

On 05/04/2018 16:11, Rich Megginson wrote:
> Is it possible that you could start over from scratch, using the latest
> instructions/files at
> https://github.com/ViaQ/Main/pull/37/files?
> 
> On 04/05/2018 07:19 AM, Peter Hudec wrote:
>> The version is from
>>
>> /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py:get_openshift_version
>>
>>
>> [PROD] r...@dipostat01.cnc.sk: /usr/share/ansible/openshift-ansible #
>> /usr/bin/openshift version
>> openshift v3.10.0-alpha.0+f0186dd-401
>> kubernetes v1.9.1+a0ce1bc657
>> etcd 3.2.16
>>
>> this binary is from the origin-3.7.2-1.el7.git.0.f0186dd.x86_64 package
>>
>> [PROD] r...@dipostat01.cnc.sk: ~ # rpm -qf /usr/bin/openshift
>> origin-3.7.2-1.el7.git.0.f0186dd.x86_64
>>
>> Hmm, why the version do not match ?
>>
>> Peter
>>
>> On 04/04/2018 17:41, Shirly Radco wrote:
>>> Hi,
>>>
>>>
>>> I have a updated the installation instructions for installing OpenShift
>>> 3.9, based on Rich's work , for the oVirt use case.
>>>
>>> Please make sure where you get the 3.10 rpm and disable that repo.
>>>
>>> This is the PR that includes the metrics store installation of
>>> OpenShift 3.9
>>>
>>> https://github.com/sradco/ovirt-site/blob/74f1e772c8ca75d4b9e57a3c02cce49c5030f7f7/source/develop/release-management/features/metrics/setting-up-viaq-logging.html.md
>>>
>>>
>>> It should be merged soon, but you can use it to install.
>>>
>>>
>>> Please make sure to add the ansible-inventory-origin-39-aio file as
>>> described below.
>>> These are required parameters for the ansible playbook.
>>>
>>>
>>> Best regards,
>>>
>>> -- 
>>>
>>> SHIRLY RADCO
>>>
>>> BI SeNIOR SOFTWARE ENGINEER
>>>
>>> Red Hat Israel 
>>>
>>>    
>>> TRIED. TESTED. TRUSTED. 
>>>
>>>
>>> On Wed, Apr 4, 2018 at 5:41 PM, Rich Megginson >> > wrote:
>>>
>>>  I'm sorry.  I misunderstood the request.
>>>
>>>  We are in the process of updating the instructions for installing
>>>  viaq logging based on upstream origin 3.9 -
>>>  https://github.com/ViaQ/Main/pull/37
>>>  
>>>
>>>  In the meantime, you can follow along on that PR, and we will have
>>>  instructions very soon.
>>>
>>>
>>>  On 04/04/2018 08:26 AM, Rich Megginson wrote:
>>>
>>>  On 04/04/2018 08:22 AM, Shirly Radco wrote:
>>>
>>>
>>>
>>>  --
>>>
>>>  SHIRLY RADCO
>>>
>>>  BI SeNIOR SOFTWARE ENGINEER
>>>
>>>  Red Hat Israel 
>>>
>>>  
>>>  TRIED. TESTED. TRUSTED. 
>>>
>>>
>>>  On Wed, Apr 4, 2018 at 5:07 PM, Peter Hudec >>   >>  >> wrote:
>>>
>>>      almost the same issue, the versio for openshoft release
>>>  changed to 3.9
>>>
>>>      Failure summary:
>>>
>>>
>>>        1. Hosts:    localhost
>>>           Play:     Determine openshift_version to configure
>>>  on first
>>>      master
>>>           Task:     For an RPM install, abort when the
>>> release
>>>      requested does
>>>      not match the available version.
>>>           Message:  You requested openshift_release 3.9,
>>>  which is not
>>>      matched by
>>>                     the latest OpenShift RPM we detected as
>>>  origin-3.10.0
>>>                     on host localhost.
>>>                     We will only install the latest RPMs, so
>>>  please ensure
>>>      you are getting the release
>>>                     you expect. You may need to adjust your
>>>  Ansible
>>>      inventory, modify the repositories
>>>                     available on the host, or run the
>>>  appropriate OpenShift
>>>      upgrade playbook.
>>>
>>>
>>>  What is the rpm version installed of OpenShift origin?
>>>  Do you perhaps have another repo that includes the
>>> OpenShift
>>>  origin-3.10.0?
>>>
>>>
>>>  origin is the upstream version of OpenShift, which is
>>> unsupported.
>>>
>>>  sudo repoquery -i origin
>>>
>>>  This will tell you which repo it came from
>>>
>>>  You will need to disable that repo and uninstall any packages
>>>  which came from that repo
>>>
>>>  Another mystery is why it is trying to install origin in the
>>>  first place.  openshift-ansible with
>>>  openshift_deployment_type=openshift-enterprise should not
>>> try to
>>>  install origin, it 

[ovirt-users] Is compatibility level change required for upgrading from 4.0 to 4.2?

2018-04-05 Thread Luca 'remix_tj' Lorenzetto
Hello,

we're planning an upgrade of an old 4.0 setup to 4.2, going through 4.1.

What we found out is that when upgrading from one major version to the
next, the cluster and datacenter compatibility upgrade has to be done at
the end of the upgrade.  This means we also need to restart our VMs to
adapt the compatibility level.

Do we need to upgrade the compatibility level to 4.1 before starting the
upgrade to 4.2?

We have more than 400 VMs, and only a few of them can be restarted at our
convenience.


-- 
"It is absurd to employ men of excellent intelligence to perform
calculations that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibniz, philosopher and mathematician (1646-1716)

"The Internet is the biggest library in the world.
But the problem is that the books are all scattered on the floor"
John Allen Paulos, mathematician (1945-living)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


Re: [ovirt-users] Engine reports

2018-04-05 Thread Rich Megginson
Is it possible that you could start over from scratch, using the latest 
instructions/files at

https://github.com/ViaQ/Main/pull/37/files?

On 04/05/2018 07:19 AM, Peter Hudec wrote:

The version is from

/usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py:get_openshift_version

[PROD] r...@dipostat01.cnc.sk: /usr/share/ansible/openshift-ansible #
/usr/bin/openshift version
openshift v3.10.0-alpha.0+f0186dd-401
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.16

this binary is from the origin-3.7.2-1.el7.git.0.f0186dd.x86_64 package

[PROD] r...@dipostat01.cnc.sk: ~ # rpm -qf /usr/bin/openshift
origin-3.7.2-1.el7.git.0.f0186dd.x86_64

Hmm, why the version do not match ?

Peter

On 04/04/2018 17:41, Shirly Radco wrote:

Hi,


I have a updated the installation instructions for installing OpenShift
3.9, based on Rich's work , for the oVirt use case.

Please make sure where you get the 3.10 rpm and disable that repo.

This is the PR that includes the metrics store installation of OpenShift 3.9

https://github.com/sradco/ovirt-site/blob/74f1e772c8ca75d4b9e57a3c02cce49c5030f7f7/source/develop/release-management/features/metrics/setting-up-viaq-logging.html.md

It should be merged soon, but you can use it to install.


Please make sure to add the ansible-inventory-origin-39-aio file as
described below.
These are required parameters for the ansible playbook.


Best regards,

--

SHIRLY RADCO

BI SeNIOR SOFTWARE ENGINEER

Red Hat Israel 

  
TRIED. TESTED. TRUSTED. 


On Wed, Apr 4, 2018 at 5:41 PM, Rich Megginson > wrote:

 I'm sorry.  I misunderstood the request.

 We are in the process of updating the instructions for installing
 viaq logging based on upstream origin 3.9 -
 https://github.com/ViaQ/Main/pull/37
 

 In the meantime, you can follow along on that PR, and we will have
 instructions very soon.


 On 04/04/2018 08:26 AM, Rich Megginson wrote:

 On 04/04/2018 08:22 AM, Shirly Radco wrote:



 --

 SHIRLY RADCO

 BI SeNIOR SOFTWARE ENGINEER

 Red Hat Israel 

 
 TRIED. TESTED. TRUSTED. 


 On Wed, Apr 4, 2018 at 5:07 PM, Peter Hudec  >> wrote:

     almost the same issue, the versio for openshoft release
 changed to 3.9

     Failure summary:


       1. Hosts:    localhost
          Play:     Determine openshift_version to configure
 on first
     master
          Task:     For an RPM install, abort when the release
     requested does
     not match the available version.
          Message:  You requested openshift_release 3.9,
 which is not
     matched by
                    the latest OpenShift RPM we detected as
 origin-3.10.0
                    on host localhost.
                    We will only install the latest RPMs, so
 please ensure
     you are getting the release
                    you expect. You may need to adjust your
 Ansible
     inventory, modify the repositories
                    available on the host, or run the
 appropriate OpenShift
     upgrade playbook.


 What is the rpm version installed of OpenShift origin?
 Do you perhaps have another repo that includes the OpenShift
 origin-3.10.0?


 origin is the upstream version of OpenShift, which is unsupported.

 sudo repoquery -i origin

 This will tell you which repo it came from

 You will need to disable that repo and uninstall any packages
 which came from that repo

 Another mystery is why it is trying to install origin in the
 first place.  openshift-ansible with
 openshift_deployment_type=openshift-enterprise should not try to
 install origin, it should install packages with names like
 atomic-openshift-master, atomic-openshift-node, etc.
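 A sketch of that cleanup, purely as an illustration (the repo id below
 is a stand-in -- use whichever repo repoquery actually reports):

```shell
# Which repo did the stray origin package come from?
sudo repoquery -i origin | grep -i repo

# Disable that repo (repo id here is an example;
# yum-config-manager is provided by the yum-utils package)
sudo yum-config-manager --disable centos-openshift-origin

# Remove any origin packages that were installed from it
sudo yum remove 'origin*'
```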




     On 04/04/2018 14:18, Shirly Radco wrote:
     > Adding Nicolas since he hit the same issue.
     >

[ovirt-users] Updates to oVirt 4.2.2

2018-04-05 Thread Sandro Bonazzola
Hi,

the oVirt team released today, April 5th, an update to oVirt 4.2.2 including
the following packages:
- ovirt-hosted-engine-ha-2.2.10
- ovirt-hosted-engine-setup-2.2.16
- cockpit-ovirt-0.11.20-1
- ovirt-release42-4.2.2-3

Addressing the following issues:
 - [BZ 1560666] - Hosted Engine VM (deployed in the past) fails to reboot with
'libvirtError: internal error: failed to format device alias for PTY retrieval'
due to an error in the console device in the libvirt XML generated by the engine
 - [BZ 1560551] - The user is asked twice to enter the engine VM FQDN
 - [BZ 1560655] - Node 0 flow is consuming the portal IP used for the discovery.
 - [BZ 1562349] - Upgrade appliance failed with 'Environment customization':
'OVEHOSTED_STORAGE/spUUID'
 - [BZ 1563664] - Ansible deployments fail on NFS with the root_squash option
 - [BZ 1562011] - Ansible: Deploy HE failed with FC storage via cockpit.

- Switch to signed RPMs yum repository for CentOS Virt SIG on ppc64le

An update for oVirt Node is already available. An updated oVirt Node ISO is
being composed right now and will be available for download by tomorrow
morning.


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Engine reports

2018-04-05 Thread Peter Hudec
The version is from

/usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py:get_openshift_version

[PROD] r...@dipostat01.cnc.sk: /usr/share/ansible/openshift-ansible #
/usr/bin/openshift version
openshift v3.10.0-alpha.0+f0186dd-401
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.16

this binary is from the origin-3.7.2-1.el7.git.0.f0186dd.x86_64 package

[PROD] r...@dipostat01.cnc.sk: ~ # rpm -qf /usr/bin/openshift
origin-3.7.2-1.el7.git.0.f0186dd.x86_64

Hmm, why does the version not match?

Peter

On 04/04/2018 17:41, Shirly Radco wrote:
> Hi,
> 
> 
> I have updated the installation instructions for installing OpenShift
> 3.9, based on Rich's work, for the oVirt use case.
> 
> Please find out where you got the 3.10 RPM from and disable that repo.
> 
> This is the PR that includes the metrics store installation of OpenShift 3.9
> 
> https://github.com/sradco/ovirt-site/blob/74f1e772c8ca75d4b9e57a3c02cce49c5030f7f7/source/develop/release-management/features/metrics/setting-up-viaq-logging.html.md
> 
> It should be merged soon, but you can use it to install.
> 
> 
> Please make sure to add the ansible-inventory-origin-39-aio file as
> described below.
> These are required parameters for the ansible playbook.
> 
> 
> Best regards,
> 
> 
> On Wed, Apr 4, 2018 at 5:41 PM, Rich Megginson wrote:
> 
> I'm sorry.  I misunderstood the request.
> 
> We are in the process of updating the instructions for installing
> viaq logging based on upstream origin 3.9 -
> https://github.com/ViaQ/Main/pull/37
> 
> 
> In the meantime, you can follow along on that PR, and we will have
> instructions very soon.
> 
> 
> On 04/04/2018 08:26 AM, Rich Megginson wrote:
> 
> On 04/04/2018 08:22 AM, Shirly Radco wrote:
> 
> 
> 
> 
> On Wed, Apr 4, 2018 at 5:07 PM, Peter Hudec wrote:
> 
>     Almost the same issue; the version for the OpenShift release
> changed to 3.9.
> 
>     Failure summary:
> 
> 
>       1. Hosts:    localhost
>          Play:     Determine openshift_version to configure
> on first
>     master
>          Task:     For an RPM install, abort when the release
>     requested does
>     not match the available version.
>          Message:  You requested openshift_release 3.9,
> which is not
>     matched by
>                    the latest OpenShift RPM we detected as
> origin-3.10.0
>                    on host localhost.
>                    We will only install the latest RPMs, so
> please ensure
>     you are getting the release
>                    you expect. You may need to adjust your
> Ansible
>     inventory, modify the repositories
>                    available on the host, or run the
> appropriate OpenShift
>     upgrade playbook.
> 
> 
> What RPM version of OpenShift origin is installed?
> Do you perhaps have another repo that includes the OpenShift
> origin-3.10.0?
> 
> 
> origin is the upstream version of OpenShift, which is unsupported.
> 
> sudo repoquery -i origin
> 
> This will tell you which repo it came from
> 
> You will need to disable that repo and uninstall any packages
> which came from that repo
> 
> Another mystery is why it is trying to install origin in the
> first place.  openshift-ansible with
> openshift_deployment_type=openshift-enterprise should not try to
> install origin, it should install packages with names like
> atomic-openshift-master, atomic-openshift-node, etc.
> 
> 
> 
> 
>     On 04/04/2018 14:18, Shirly Radco wrote:
>     > Adding Nicolas since he hit the same issue.
>     >

Re: [ovirt-users] oVirt non-self-hosted HA

2018-04-05 Thread Tom


Sent from my iPhone

> On Apr 5, 2018, at 5:29 AM, Yaniv Kaul  wrote:
> 
> 
> 
>> On Thu, Apr 5, 2018 at 9:08 AM, TomK  wrote:
>>> On 4/4/2018 3:11 AM, Yaniv Kaul wrote:
>>> 
>>> 
>>> On Wed, Apr 4, 2018 at 12:39 AM, Tom wrote:
>>> 
>>> 
>>> 
>>> Sent from my iPhone
>>> 
>>> On Apr 3, 2018, at 9:32 AM, Yaniv Kaul wrote:
>>> 
 
 
 On Tue, Apr 3, 2018 at 3:12 PM, TomK wrote:
 
 Hey Guy's,
 
 If I'm looking to setup the oVirt engine in an HA
 configuration off the physical servers hosting my VM's (non
 self hosted), what are my options here?
 
 I want to setup two to four active oVirt engine instances
 elsewhere and handle the HA via something like haproxy /
 keepalived to keep the entire experience seamless to the user.
 
 
 You will need to set up the oVirt engine service as well as the PG
 database (and ovirt-engine-dwhd service and any other service we
 run next to the engine) as highly available module.
 In pacemaker[1], for example.
 You'll need to ensure configuration is also sync'ed between nodes,
 etc.
 Y.
>>> 
>>> So already have one ovirt engine setup separately on a vm that
>>> manages two remote physical hosts.  So familiar with the single host
>>> approach which I would simply replicate.  At least that’s the idea
>>> anyway.  Could you please expand a bit on the highly available
>>> module and  syncing the config between hosts?
>>> 
>>> 
>>> That's a different strategy, which is also legit - you treat this VM as a 
>>> highly available resource. Now you do not need to sync the config - just 
>>> the VM disk and config.
>> 
>> I think there's a postgres component too and if oVirt engine keeps all its
>> data on the postgres tables, then synchronizing this piece might be all I
>> need?  I'm not sure how the separate oVirt engines sitting on various
>> separate physical hosts keep their settings in sync about the rest of the 
>> physicals in an oVirt environment. (Assume we may have 100 oVirt physicals 
>> for example.)
> 
> There's more than just the database, although it contains 99% of what you 
> need. See the content of the result of 'engine-backup' command.
> I think you might be somewhat confusing between the number of oVirt 
> hypervisors (we support hundreds) and the Engine - the management, which is 
> single - and with hosted-engine, it's a single, but highly available virtual 
> machine - that can run on one of several (I suggest 3-8) of those hypervisors.
> Y.

Yah, still very new to much of this.  Thank you again.

I’ll take that away and do some reading.  

Cheers,
Tom

>  
>> 
>> 
>>> Perhaps something like 
>>> https://www.unixarena.com/2015/12/rhel-7-pacemaker-configuring-ha-kvm-guest.html
>>>  .
>>> 
>>> But if you are already doing that, I'm not sure why you'd prefer this over 
>>> hosted-engine setup.
>> 
>> I'm comparing both options.  I really don't want to ask too many specifics
>> until I have the chance to read into the details of both.
>> 
>>> Y.
>> 
>> Cheers,
>> Tom
>> 
>>> 
>>> 
>>> Cheers,
>>> Tom
>>> 
 
 [1] https://clusterlabs.org/quickstart-redhat.html
 
 
 
 From what I've seen in oVirt, that seems to be possible
 without the two oVirt engines even knowing each other's
 existence but is it something anyone has ever done?  Any
 recommendations in this case?
 
 Having settings replicated would be a bonus but I would be
 comfortable if they weren't and I handle that myself.
 
 -- Cheers,
 Tom K.
 
 -
 
 Living on earth is expensive, but it includes a free trip
 around the sun.
 
 
 
 
>>> 
>> 
>> 
>> -- 
>> Cheers,
>> Tom K.
>> -
>> 
>> Living on earth is expensive, but it includes a free trip around the sun.
>> 
> 


[ovirt-users] Why RAW images when using GlusterFS?

2018-04-05 Thread Nicolas Ecarnot

Hello,

Amongst others, I have one 3.6 DC that has been working very well for years,
all based on GlusterFS.
When taking a close look (qemu-img info) at the images, I see that their
format is all RAW and not QCOW2.
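For reference, the check looks roughly like this (the image path is
illustrative; the real one lives under the storage domain mount):

```shell
# Report the on-disk format of a disk image; on Gluster-backed
# storage domains this prints "file format: raw" for these disks
qemu-img info /rhev/data-center/mnt/glusterSD/server:_volume/sd-uuid/images/img-uuid/vol-uuid
```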


I never noticed or bothered before, but I'm wondering:
- is it by design?
- is it something we can change? (I'd prefer qcow2)
- are there some limitations?

And finally, I have the same questions about NFS storage domains.

Thank you.

--
Nicolas ECARNOT


Re: [ovirt-users] oVirt non-self-hosted HA

2018-04-05 Thread Yaniv Kaul
On Thu, Apr 5, 2018 at 9:08 AM, TomK  wrote:

> On 4/4/2018 3:11 AM, Yaniv Kaul wrote:
>
>>
>>
>> On Wed, Apr 4, 2018 at 12:39 AM, Tom wrote:
>>
>>
>>
>> Sent from my iPhone
>>
>> On Apr 3, 2018, at 9:32 AM, Yaniv Kaul wrote:
>>
>>
>>>
>>> On Tue, Apr 3, 2018 at 3:12 PM, TomK wrote:
>>>
>>> Hey Guy's,
>>>
>>> If I'm looking to setup the oVirt engine in an HA
>>> configuration off the physical servers hosting my VM's (non
>>> self hosted), what are my options here?
>>>
>>> I want to setup two to four active oVirt engine instances
>>> elsewhere and handle the HA via something like haproxy /
>>> keepalived to keep the entire experience seamless to the user.
>>>
>>>
>>> You will need to set up the oVirt engine service as well as the PG
>>> database (and ovirt-engine-dwhd service and any other service we
>>> run next to the engine) as highly available module.
>>> In pacemaker[1], for example.
>>> You'll need to ensure configuration is also sync'ed between nodes,
>>> etc.
>>> Y.
>>>
>>
>> So already have one ovirt engine setup separately on a vm that
>> manages two remote physical hosts.  So familiar with the single host
>> approach which I would simply replicate.  At least that’s the idea
>> anyway.  Could you please expand a bit on the highly available
>> module and  syncing the config between hosts?
>>
>>
>> That's a different strategy, which is also legit - you treat this VM as a
>> highly available resource. Now you do not need to sync the config - just
>> the VM disk and config.
>>
>
> I think there's a postgres component too and if oVirt engine keeps all
> its data on the postgres tables, then synchronizing this piece might be
> all I need?  I'm not sure how the separate oVirt engines sitting on various
> separate physical hosts keep their settings in sync about the rest of the
> physicals in an oVirt environment. (Assume we may have 100 oVirt physicals
> for example.)


There's more than just the database, although it contains 99% of what you
need. See the content of the result of 'engine-backup' command.
I think you might be confusing the number of oVirt hypervisors (we support
hundreds) with the Engine - the management, which is single - and with
hosted-engine, it's a single, but highly available virtual machine - that can
run on one of several (I suggest 3-8) of those hypervisors.
Y.
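As a minimal sketch (file names are just examples), taking such a backup
and restoring it on a standby machine looks like:

```shell
# Take a full backup: engine DB plus the config/PKI files
# that live outside PostgreSQL
engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

# On the standby machine, restore it; --provision-db recreates
# the database before restoring (see engine-backup --help)
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
    --provision-db --restore-permissions
```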


>
>
>> Perhaps something like
>> https://www.unixarena.com/2015/12/rhel-7-pacemaker-configuring-ha-kvm-guest.html .
>>
>> But if you are already doing that, I'm not sure why you'd prefer this
>> over hosted-engine setup.
>>
>
> I'm comparing both options.  I really don't want to ask too many specifics
> until I have the chance to read into the details of both.
>
> Y.
>>
>
> Cheers,
> Tom
>
>
>>
>> Cheers,
>> Tom
>>
>>
>>> [1] https://clusterlabs.org/quickstart-redhat.html
>>> 
>>>
>>>
>>> From what I've seen in oVirt, that seems to be possible
>>> without the two oVirt engines even knowing each other's
>>> existence but is it something anyone has ever done?  Any
>>> recommendations in this case?
>>>
>>> Having settings replicated would be a bonus but I would be
>>> comfortable if they weren't and I handle that myself.
>>>
>>> -- Cheers,
>>> Tom K.
>>> 
>>> -
>>>
>>> Living on earth is expensive, but it includes a free trip
>>> around the sun.
>>>
>>> 
>>>
>>>
>>>
>>
>
> --
> Cheers,
> Tom K.
> 
> -
>
> Living on earth is expensive, but it includes a free trip around the sun.
>
>


Re: [ovirt-users] Cannot update Node 4.2 to 4.2.2

2018-04-05 Thread Maton, Brett
Shot in the dark, but have you got the EPEL repo enabled by any chance?
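A quick way to check, as a sketch (yum-config-manager comes from the
yum-utils package):

```shell
# Is EPEL among the enabled repos?
yum repolist enabled | grep -i epel

# If so, disable it before retrying the upgrade...
sudo yum-config-manager --disable epel

# ...and re-enable it later if needed, once the node is upgraded
sudo yum-config-manager --enable epel
```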

On 4 April 2018 at 20:20, Vincent Royer  wrote:

> Trying to update my nodes to 4.2.2, having a hard time.
>
> I updated the engine, no problems. Migrated VMs off host 1 and put it into
> maintenance.  I do a "check upgrade" in the GUI, it finds an update, but
> fails to install.
>
> Drop to CLI and try to do it manually
>
> yum update -y
>
> I get lots of dependency errors for missing packages.   A quick google
> shows that I may have the incorrect RPMs installed, and should only have
> the ovirt-node-ng-image and appliance rpms. So I try to install only those.
>
> So I think I have the right rpms now
>
>
> But now when I do a yum update it says there are no packages marked for
> update.
>
> How to fix this?   Is there anything I have done wrong that I shouldn't do
> on my other hosts?
>
>
>
>
>


Re: [ovirt-users] oVirt non-self-hosted HA

2018-04-05 Thread Johan Bernhardsson
The norm is to have a cluster with shared storage, so you have 3 to 5 
hardware nodes that share storage for the hosted engine. That shared 
storage is in sync, so you don't have one engine per physical node.


If one hardware node goes down, the engine is restarted on another node with 
the help of the hosted-engine service.


/Johan

On April 5, 2018 08:11:50 TomK  wrote:

On 4/4/2018 3:11 AM, Yaniv Kaul wrote:


On Wed, Apr 4, 2018 at 12:39 AM, Tom wrote:



Sent from my iPhone

On Apr 3, 2018, at 9:32 AM, Yaniv Kaul wrote:



On Tue, Apr 3, 2018 at 3:12 PM, TomK wrote:

Hey Guy's,

If I'm looking to setup the oVirt engine in an HA
configuration off the physical servers hosting my VM's (non
self hosted), what are my options here?

I want to setup two to four active oVirt engine instances
elsewhere and handle the HA via something like haproxy /
keepalived to keep the entire experience seamless to the user.


You will need to set up the oVirt engine service as well as the PG
database (and ovirt-engine-dwhd service and any other service we
run next to the engine) as highly available module.
In pacemaker[1], for example.
You'll need to ensure configuration is also sync'ed between nodes,
etc.
Y.

So already have one ovirt engine setup separately on a vm that
manages two remote physical hosts.  So familiar with the single host
approach which I would simply replicate.  At least that’s the idea
anyway.  Could you please expand a bit on the highly available
module and  syncing the config between hosts?


That's a different strategy, which is also legit - you treat this VM as
a highly available resource. Now you do not need to sync the config -
just the VM disk and config.

I think there's a postgres component too and if oVirt engine keeps all
its data on the postgres tables, then synchronizing this piece might be
all I need?  I'm not sure how the separate oVirt engines sitting on
various separate physical hosts keep their settings in sync about the
rest of the physicals in an oVirt environment. (Assume we may have 100
oVirt physicals for example.)

Perhaps something like
https://www.unixarena.com/2015/12/rhel-7-pacemaker-configuring-ha-kvm-guest.html
.

But if you are already doing that, I'm not sure why you'd prefer this
over hosted-engine setup.

I'm comparing both options.  I really don't want to ask too many
specifics until I have the chance to read into the details of both.

Y.

Cheers,
Tom



Cheers,
Tom


[1] https://clusterlabs.org/quickstart-redhat.html



From what I've seen in oVirt, that seems to be possible
without the two oVirt engines even knowing each other's
existence but is it something anyone has ever done?  Any
recommendations in this case?

Having settings replicated would be a bonus but I would be
comfortable if they weren't and I handle that myself.

--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip
around the sun.




--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.






Re: [ovirt-users] oVirt non-self-hosted HA

2018-04-05 Thread TomK

On 4/4/2018 3:11 AM, Yaniv Kaul wrote:



On Wed, Apr 4, 2018 at 12:39 AM, Tom wrote:




Sent from my iPhone

On Apr 3, 2018, at 9:32 AM, Yaniv Kaul wrote:




On Tue, Apr 3, 2018 at 3:12 PM, TomK wrote:

Hey Guy's,

If I'm looking to setup the oVirt engine in an HA
configuration off the physical servers hosting my VM's (non
self hosted), what are my options here?

I want to setup two to four active oVirt engine instances
elsewhere and handle the HA via something like haproxy /
keepalived to keep the entire experience seamless to the user.


You will need to set up the oVirt engine service as well as the PG
database (and ovirt-engine-dwhd service and any other service we
run next to the engine) as highly available module.
In pacemaker[1], for example.
You'll need to ensure configuration is also sync'ed between nodes,
etc.
Y.


So already have one ovirt engine setup separately on a vm that
manages two remote physical hosts.  So familiar with the single host
approach which I would simply replicate.  At least that’s the idea
anyway.  Could you please expand a bit on the highly available
module and  syncing the config between hosts?


That's a different strategy, which is also legit - you treat this VM as 
a highly available resource. Now you do not need to sync the config - 
just the VM disk and config.


I think there's a postgres component too and if oVirt engine keeps all 
its data on the postgres tables, then synchronizing this piece might be 
all I need?  I'm not sure how the separate oVirt engines sitting on 
various separate physical hosts keep their settings in sync about the 
rest of the physicals in an oVirt environment. (Assume we may have 100 
oVirt physicals for example.)


Perhaps something like 
https://www.unixarena.com/2015/12/rhel-7-pacemaker-configuring-ha-kvm-guest.html 
.


But if you are already doing that, I'm not sure why you'd prefer this 
over hosted-engine setup.


I'm comparing both options.  I really don't want to ask too many 
specifics until I have the chance to read into the details of both.



Y.


Cheers,
Tom




Cheers,
Tom



[1] https://clusterlabs.org/quickstart-redhat.html



From what I've seen in oVirt, that seems to be possible
without the two oVirt engines even knowing each other's
existence but is it something anyone has ever done?  Any
recommendations in this case?

Having settings replicated would be a bonus but I would be
comfortable if they weren't and I handle that myself.

-- 
Cheers,

Tom K.

-

Living on earth is expensive, but it includes a free trip
around the sun.









--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.
