[ovirt-users] Re: HCI Disaster Recovery

2020-01-10 Thread Strahil
It's actually not so easy.

The fastest way to recover is simply to restore from backup.
Otherwise, the flow should be:
1. Install the new node (a new hostname will make things easier).
2. Use Gluster's replace-brick to swap the dead brick for a new one (see the sketch below).
3. Once oVirt's integration with Gluster detects the change, you will be able to
forcefully remove the dead node.
4. Add the newly installed node to the relevant cluster (with or without
hosted-engine deployment).
5. Test migrating a low-priority VM to the new host.
6. Power up a test VM on the new host to verify functionality.
7. Make the new node SPM and test snapshots and new VM creation.

As you can see, in order to remove a missing node it must no longer be part of
the Gluster cluster (trusted storage pool).
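A minimal sketch of steps 2 and 3 from the Gluster side (the hostnames, volume
name and brick paths below are hypothetical; adjust them to your layout):

    # From a surviving node, bring the replacement host into the trusted pool
    gluster peer probe newnode.example.com
    # Replace the dead brick with the new one (repeat per volume)
    gluster volume replace-brick data \
        deadnode.example.com:/gluster_bricks/data/data \
        newnode.example.com:/gluster_bricks/data/data \
        commit force
    # Watch self-heal catch up on the new brick
    gluster volume heal data info
    # Once no volume references the dead node any more, drop it from the pool
    gluster peer detach deadnode.example.com force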

Best Regards,
Strahil Nikolov

On Jan 10, 2020 20:10, Christian Reiss wrote:
> Hey,
>
> is there really no ovirt native way to restore a single host and bring
> it back into the cluster?
>
> -Chris.


[ovirt-users] Re: HCI Disaster Recovery

2020-01-10 Thread Christian Reiss

Hey,

is there really no ovirt native way to restore a single host and bring 
it back into the cluster?


-Chris.

On 07.01.2020 09:54, Christian Reiss wrote:

Hey folks,

   - theoretical question, no live data in jeopardy -

Let's say a 3-way HCI cluster is up and running, with engine running, 
all is well. The setup was done via gui, including gluster.


Now I would kill a host, poweroff & disk wipe. Simulating a full node 
failure.


The remaining nodes should keep running on (3 copy sync, no arbiter), 
vms keep running or will be restarted. I would reinstall using the ovirt 
node installer on the "failed" node.


This would net me with a completely empty, no-gluster setup. What is the 
oVirt way to recover from this point onward?


Thanks for your continued support! <3
-Christian.



--
Christian Reiss - em...@christian-reiss.de   /"\   ASCII Ribbon
      christ...@reiss.nrw                    \ /   Campaign
                                              X    against HTML
XMPP  ch...@alpha-labs.net                   / \   in eMails
WEB   christian-reiss.de, reiss.nrw

 GPG Retrieval http://gpg.christian-reiss.de
 GPG ID ABCD43C5, 0x44E29126ABCD43C5
 GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5

 "It's better to reign in hell than to serve in heaven.",
  John Milton, Paradise lost.


[ovirt-users] Re: Ovirt OVN help needed

2020-01-10 Thread Strahil
Hi Miguel,

It seems the Cluster's switch is of type 'Linux Bridge'.

Best Regards,
Strahil Nikolov

On Jan 10, 2020 12:37, Miguel Duarte de Mora Barroso wrote:
>
> On Mon, Jan 6, 2020 at 9:21 PM Strahil Nikolov  wrote: 
> > 
> > Hi Miguel, 
> > 
> > I had read some blogs about OVN and I tried to collect some data that might 
> > hint where the issue is. 
> > 
> > I still struggle to "decode" that , but it may be easier for you or anyone 
> > on the list. 
> > 
> > I am eager to receive your reply. 
> > Thanks in advance and Happy New Year ! 
>
> Hi, 
>
> Sorry for not noticing your email before. Hope late is better than never .. 
>
> > 
> > 
> > Best Regards, 
> > Strahil Nikolov 
> > 
> > On Wednesday, December 18, 2019 at 21:10:31 GMT+2, Strahil Nikolov wrote: 
> > 
> > 
> > That's a good question. 
> > ovirtmgmt is using linux bridge, but I'm not so sure about the br-int. 
> > 'brctl show' is not understanding what type is br-int , so I guess 
> > openvswitch. 
> > 
> > This is still a guess, so you can give me the command to verify that :) 
>
> You can use the GUI for that; access "Compute > clusters" , choose the 
> cluster in question, hit 'edit', then look for the 'Switch type' 
> entry. 
>
>
> > 
> > As the system was first build on 4.2.7 , most probably it never used 
> > anything except openvswitch. 
> > 
> > Thanks in advance for your help. I really appreciate that. 
> > 
> > Best Regards, 
> > Strahil Nikolov 
> > 
> > 
> > On Wednesday, December 18, 2019 at 17:53:31 GMT+2, Miguel Duarte de Mora Barroso wrote: 
> > 
> > 
> > On Wed, Dec 18, 2019 at 6:35 AM Strahil Nikolov  
> > wrote: 
> > > 
> > > Hi Dominik, 
> > > 
> > > sadly reinstall of all hosts is not helping. 
> > > 
> > > @ Miguel, 
> > > 
> > > I have 2 clusters 
> > > 1. Default (amd-based one) -> ovirt1 (192.168.1.90) & ovirt2 
> > > (192.168.1.64) 
> > > 2. Intel (intel-based one and a gluster arbiter) -> ovirt3 (192.168.1.41) 
> > 
> > But what are the switch types used on the clusters: openvswitch *or* 
> > legacy / linux bridges ? 
> > 
> > 
> > 
> > > 
> > > The output of the 2 commands (after I run reinstall on all hosts ): 
> > > 
> > > [root@engine ~]# ovn-sbctl list encap 
> > > _uuid  : d4d98c65-11da-4dc8-9da3-780e7738176f 
> > > chassis_name    : "baa0199e-d1a4-484c-af13-a41bcad19dbc" 
> > > ip  : "192.168.1.90" 
> > > options    : {csum="true"} 
> > > type    : geneve 
> > > 
> > > _uuid  : ed8744a5-a302-493b-8c3b-19a4d2e170de 
> > > chassis_name    : "25cc77b3-046f-45c5-af0c-ffb2f77d73f1" 
> > > ip  : "192.168.1.64" 
> > > options    : {csum="true"} 
> > > type    : geneve 
> > > 
> > > _uuid  : b72ff0ab-92fc-450c-a6eb-ab2869dee217 
> > > chassis_name    : "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3" 
> > > ip  : "192.168.1.41" 
> > > options    : {csum="true"} 
> > > type    : geneve 
> > > 
> > > 
> > > [root@engine ~]# ovn-sbctl list chassis 
> > > _uuid  : b1da5110-f477-4c60-9963-b464ab96c644 
> > > encaps  : [ed8744a5-a302-493b-8c3b-19a4d2e170de] 
> > > external_ids    : {datapath-type="", 
> > > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> > >  ovn-bridge-mappings=""} 
> > > hostname    : "ovirt2.localdomain" 
> > > name    : "25cc77b3-046f-45c5-af0c-ffb2f77d73f1" 
> > > nb_cfg  : 0 
> > > transport_zones    : [] 
> > > vtep_logical_switches: [] 
> > > 
> > > _uuid  : dcc94e1c-bf44-46a3-b9d1-45360c307b26 
> > > encaps  : [b72ff0ab-92fc-450c-a6eb-ab2869dee217] 
> > > external_ids    : {datapath-type="", 
> > > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> > >  ovn-bridge-mappings=""} 
> > > hostname    : "ovirt3.localdomain" 
> > > name    : "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3" 
> > > nb_cfg  : 0 
> > > transport_zones    : [] 
> > > vtep_logical_switches: [] 
> > > 
> > > _uuid  : 897b34c5-d1d1-41a7-b2fd-5f1fa203c1da 
> > > encaps  : [d4d98c65-11da-4dc8-9da3-780e7738176f] 
> > > external_ids    : {datapath-type="", 
> > > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> > >  ovn-bridge-mappings=""} 
> > > hostname    : "ovirt1.localdomain" 
> > > name    : "baa0199e-d1a4-484c-af13-a41bcad19dbc" 
> > > nb_cfg  : 0 
> > > transport_zones    : [] 
> > > vtep_logical_switches: [] 
> > > 
> > > 
> > > If you know an easy method to reach default settings will be best, as I'm 
> > > currently not using OVN in production (just for tests and to learn more 
> > > about how it works) and I can afford any kind of downtime. 
> > > 
> > > Best Regards, 
> > > Strahil Nikolov 
> > > 
> > > On Tuesday, December 17, 2019 at 11:28:25 GMT+2, 

[ovirt-users] Re: Hyperconverged setup - storage architecture - scaling

2020-01-10 Thread C Williams
Thank you, Sahina!

That is great news! We might be asking more questions as we work through this.

Thanks Again

C Williams

On Fri, Jan 10, 2020 at 12:15 AM Sahina Bose  wrote:

>
>
> On Thu, Jan 9, 2020 at 10:22 PM C Williams 
> wrote:
>
>>   Hello,
>>
>> I did not see an answer to this ...
>>
>> "> 3. If the limit of hosts per datacenter is 250, then (in theory ) the
>> recomended way in reaching this treshold would be to create 20 separated
>> oVirt logical clusters with 12 nodes per each ( and datacenter managed from
>> one ha-engine ) ?"
>>
>> I have an existing oVirt datacenter with its own engine, hypervisors,
>> etc. Could I create hyperconverged clusters managed by my current
>> datacenter ? Ex. Cluster 1 -- 12 hyperconverged physical machines
>> (storage/compute), Cluster 2 -- 12 hyperconverged physical machines, etc.
>>
>
> Yes, you can add multiple clusters to be managed by your existing engine.
> The deployment flow would be different though, as the installation via
> cockpit also deploys the engine for the servers selected.
> You would need to create a custom ansible playbook that sets up the
> gluster volumes and add the hosts to the existing engine. (or do the
> creation of cluster and gluster volumes via the engine UI)
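A minimal sketch of that manual Gluster-side flow (host4/5/6.example.com are
hypothetical names; bricks assumed already formatted and mounted under
/gluster_bricks/data on each host; run from host4):

    gluster peer probe host5.example.com
    gluster peer probe host6.example.com
    gluster volume create data replica 3 \
        host4.example.com:/gluster_bricks/data/data \
        host5.example.com:/gluster_bricks/data/data \
        host6.example.com:/gluster_bricks/data/data
    gluster volume start data

The hosts would then be added to the existing engine under Compute > Hosts and
the volume attached as a new GlusterFS storage domain (or the same done with
the oVirt Ansible modules, e.g. ovirt_host and ovirt_storage_domain).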
>
>
>> Please let me know.
>>
>> Thank You
>>
>> C Williams
>>
>> On Tue, Jan 29, 2019 at 4:21 AM Sahina Bose  wrote:
>>
>>> On Mon, Jan 28, 2019 at 6:22 PM Leo David  wrote:
>>> >
>>> > Hello Everyone,
>>> > Reading through the document:
>>> > "Red Hat Hyperconverged Infrastructure for Virtualization 1.5
>>> >  Automating RHHI for Virtualization deployment"
>>> >
>>> > Regarding storage scaling,  i see the following statements:
>>> >
>>> > 2.7. SCALING
>>> > Red Hat Hyperconverged Infrastructure for Virtualization is supported
>>> for one node, and for clusters of 3, 6, 9, and 12 nodes.
>>> > The initial deployment is either 1 or 3 nodes.
>>> > There are two supported methods of horizontally scaling Red Hat
>>> Hyperconverged Infrastructure for Virtualization:
>>> >
>>> > 1 Add new hyperconverged nodes to the cluster, in sets of three, up to
>>> the maximum of 12 hyperconverged nodes.
>>> >
>>> > 2 Create new Gluster volumes using new disks on existing
>>> hyperconverged nodes.
>>> > You cannot create a volume that spans more than 3 nodes, or expand an
>>> existing volume so that it spans across more than 3 nodes at a time
>>> >
>>> > 2.9.1. Prerequisites for geo-replication
>>> > Be aware of the following requirements and limitations when
>>> configuring geo-replication:
>>> > One geo-replicated volume only
>>> > Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for
>>> Virtualization) supports only one geo-replicated volume. Red Hat recommends
>>> backing up the volume that stores the data of your virtual machines, as
>>> this is usually contains the most valuable data.
>>> > --
>>> >
>>> > Also  in oVirtEngine UI, when I add a brick to an existing volume i
>>> get the following warning:
>>> >
>>> > "Expanding gluster volume in a hyper-converged setup is not
>>> recommended as it could lead to degraded performance. To expand storage for
>>> cluster, it is advised to add additional gluster volumes."
>>> >
>>> > Those things are raising a couple of questions that maybe for some for
>>> you guys are easy to answer, but for me it creates a bit of confusion...
>>> > I am also referring to RedHat product documentation,  because I  treat
>>> oVirt as production-ready as RHHI is.
>>>
>>> oVirt and RHHI though as close to each other as possible do differ in
>>> the versions used of the various components and the support
>>> limitations imposed.
>>> >
>>> > 1. Is there any reason for not going to distributed-replicated volumes
>>> ( ie: spread one volume across 6,9, or 12 nodes ) ?
>>> > - ie: is recomanded that in a 9 nodes scenario I should have 3
>>> separated volumes,  but how should I deal with the folowing question
>>>
>>> The reason for this limitation was a bug encountered when scaling a
>>> replica 3 volume to distribute-replica. This has since been fixed in
>>> the latest release of glusterfs.
>>>
>>> >
>>> > 2. If only one geo-replicated volume can be configured,  how should I
>>> deal with 2nd and 3rd volume replication for disaster recovery
>>>
>>> It is possible to have more than 1 geo-replicated volume as long as
>>> your network and CPU resources support this.
>>>
>>> >
>>> > 3. If the limit of hosts per datacenter is 250, then (in theory ) the
>>> recomended way in reaching this treshold would be to create 20 separated
>>> oVirt logical clusters with 12 nodes per each ( and datacenter managed from
>>> one ha-engine ) ?
>>> >
>>> > 4. In present, I have the folowing one 9 nodes cluster , all hosts
>>> contributing with 2 disks each  to a single replica 3 distributed
>>> replicated volume. They where added to the volume in the following order:
>>>   > node1 - disk1
>>> > node2 - disk1
>>> > ..
>>> > node9 - disk1
>>> > node1 - disk2
>>> > node2 

[ovirt-users] Re: Ovirt OVN help needed

2020-01-10 Thread Miguel Duarte de Mora Barroso
On Mon, Jan 6, 2020 at 9:21 PM Strahil Nikolov  wrote:
>
> Hi Miguel,
>
> I had read some blogs about OVN and I tried to collect some data that might 
> hint where the issue is.
>
> I still struggle to "decode" that , but it may be easier for you or anyone on 
> the list.
>
> I am eager to receive your reply.
> Thanks in advance and Happy New Year !

Hi,

Sorry for not noticing your email before. Hope late is better than never ..

>
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, December 18, 2019 at 21:10:31 GMT+2, Strahil Nikolov wrote:
>
>
> That's a good question.
> ovirtmgmt is using linux bridge, but I'm not so sure about the br-int.
> 'brctl show' is not understanding what type is br-int , so I guess 
> openvswitch.
>
> This is still a guess, so you can give me the command to verify that :)

You can use the GUI for that; access "Compute > Clusters", choose the
cluster in question, hit 'Edit', then look for the 'Switch type'
entry.
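
From the host CLI, an alternative check (a minimal sketch, assuming the
openvswitch tools are installed on the host):

    ovs-vsctl list-br        # br-int listed here means it is an Open vSwitch bridge
    ovs-vsctl br-exists br-int && echo "br-int is managed by openvswitch"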


>
> As the system was first build on 4.2.7 , most probably it never used anything 
> except openvswitch.
>
> Thanks in advance for your help. I really appreciate that.
>
> Best Regards,
> Strahil Nikolov
>
>
> On Wednesday, December 18, 2019 at 17:53:31 GMT+2, Miguel Duarte de Mora Barroso wrote:
>
>
> On Wed, Dec 18, 2019 at 6:35 AM Strahil Nikolov  wrote:
> >
> > Hi Dominik,
> >
> > sadly reinstall of all hosts is not helping.
> >
> > @ Miguel,
> >
> > I have 2 clusters
> > 1. Default (amd-based one) -> ovirt1 (192.168.1.90) & ovirt2 (192.168.1.64)
> > 2. Intel (intel-based one and a gluster arbiter) -> ovirt3 (192.168.1.41)
>
> But what are the switch types used on the clusters: openvswitch *or*
> legacy / linux bridges ?
>
>
>
> >
> > The output of the 2 commands (after I run reinstall on all hosts ):
> >
> > [root@engine ~]# ovn-sbctl list encap
> > _uuid  : d4d98c65-11da-4dc8-9da3-780e7738176f
> > chassis_name: "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> > ip  : "192.168.1.90"
> > options: {csum="true"}
> > type: geneve
> >
> > _uuid  : ed8744a5-a302-493b-8c3b-19a4d2e170de
> > chassis_name: "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> > ip  : "192.168.1.64"
> > options: {csum="true"}
> > type: geneve
> >
> > _uuid  : b72ff0ab-92fc-450c-a6eb-ab2869dee217
> > chassis_name: "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> > ip  : "192.168.1.41"
> > options: {csum="true"}
> > type: geneve
> >
> >
> > [root@engine ~]# ovn-sbctl list chassis
> > _uuid  : b1da5110-f477-4c60-9963-b464ab96c644
> > encaps  : [ed8744a5-a302-493b-8c3b-19a4d2e170de]
> > external_ids: {datapath-type="", 
> > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> >  ovn-bridge-mappings=""}
> > hostname: "ovirt2.localdomain"
> > name: "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> > nb_cfg  : 0
> > transport_zones: []
> > vtep_logical_switches: []
> >
> > _uuid  : dcc94e1c-bf44-46a3-b9d1-45360c307b26
> > encaps  : [b72ff0ab-92fc-450c-a6eb-ab2869dee217]
> > external_ids: {datapath-type="", 
> > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> >  ovn-bridge-mappings=""}
> > hostname: "ovirt3.localdomain"
> > name: "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> > nb_cfg  : 0
> > transport_zones: []
> > vtep_logical_switches: []
> >
> > _uuid  : 897b34c5-d1d1-41a7-b2fd-5f1fa203c1da
> > encaps  : [d4d98c65-11da-4dc8-9da3-780e7738176f]
> > external_ids: {datapath-type="", 
> > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> >  ovn-bridge-mappings=""}
> > hostname: "ovirt1.localdomain"
> > name: "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> > nb_cfg  : 0
> > transport_zones: []
> > vtep_logical_switches: []
> >
> >
> > If you know an easy method to reach default settings will be best, as I'm 
> > currently not using OVN in production (just for tests and to learn more 
> > about how it works) and I can afford any kind of downtime.
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > On Tuesday, December 17, 2019 at 11:28:25 GMT+2, Miguel Duarte de Mora Barroso wrote:
> >
> >
> > On Tue, Dec 17, 2019 at 10:19 AM Miguel Duarte de Mora Barroso
> >  wrote:
> > >
> > > On Tue, Dec 17, 2019 at 9:17 AM Dominik Holler  wrote:
> > > >
> > > >
> > > >
> > > > On Tue, Dec 17, 2019 at 6:28 AM Strahil  wrote:
> > > >>
> > > >> Hi Dominik,
> > > >>
> > > >> Thanks for your reply.
> > > >>
> > > >> On ovirt1 I got the following:
> > > >> [root@ovirt1 openvswitch]# less  ovn-controller.log-20191216.gz
> > > >> 2019-12-15T01:49:02.988Z|00032|vlog|INFO|opened log file 
> > > >> 

[ovirt-users] Re: Setting up cockpit?

2020-01-10 Thread m . skrzetuski
I managed to supply my own SSL certificate and start Cockpit, but the
ovirt-cockpit-sso service is all messed up, so you need to configure a Linux
user with a password to log in. The SSO service logs the following errors.

Jan 09 23:21:09 localhost.localdomain systemd[1]: Starting oVirt-Cockpit SSO 
service...
Jan 09 23:21:09 localhost.localdomain prestart.sh[2669]: /bin/ln: failed to 
create symbolic link 
‘/usr/share/ovirt-cockpit-sso/config/cockpit/ws-certs.d/ws-certs.d’: File exists
Jan 09 23:21:09 localhost.localdomain systemd[1]: Started oVirt-Cockpit SSO 
service.
Jan 09 23:21:09 localhost.localdomain start.sh[2676]: (standard_in) 1: syntax 
error
Jan 09 23:21:09 localhost.localdomain start.sh[2676]: Installed cockpit version:
Jan 09 23:21:09 localhost.localdomain start.sh[2676]: 
/usr/share/ovirt-cockpit-sso/start.sh: line 9: [: : integer expression expected
Jan 09 23:21:09 localhost.localdomain start.sh[2676]: Installed Cockpit version 
is old, at least 140 is required for ovirt-cockpit SSO
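
For what it's worth, a minimal sketch of what probably produces those last two
errors (my guess is that start.sh feeds an empty detected version into bc and test):

    ver=""                    # e.g. the version-detection command printed nothing
    echo "$ver >= 140" | bc   # -> (standard_in) 1: syntax error
    [ "$ver" -ge 140 ]        # -> [: : integer expression expected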

I'd file a bug in Bugzilla, but as I stated before, that's not working either.


[ovirt-users] Re: Setting up cockpit?

2020-01-10 Thread m . skrzetuski
I have tried to file a bug in Red Hat Bugzilla several times now, but it's
broken, and it's frustrating as hell. Now I get the following error. Open
source is such a beautiful world.

Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /post_bug.cgi.

Reason: Error reading from remote server

Apache Server at bugzilla.redhat.com Port 443


[ovirt-users] Re: is there any feature of load balancing for engine??

2020-01-10 Thread yam yam
Thanks David,

I've just checked it, and that's what I want.
A single engine seems quite enough in the usual case.

Best regards,


[ovirt-users] Re: is there any feature of load balancing for engine??

2020-01-10 Thread yam yam
As you said, adding a separate engine seems far better.
I've just checked 'Supported Limits for Red Hat Virtualization', which
describes the specs as you said.

thanks!! :)

Best Regards,