Hi. AFAIK, during hosted engine deployment the installer checks the GlusterFS replica type, and replica 3 is a mandatory requirement. Previously, I got advice on this mailing list to look at a DRBD solution if you don't have a third node to run a GlusterFS replica 3 setup.
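As an alternative to a full third data copy, Gluster supports a replica 3 volume where the third brick is an arbiter that stores only metadata, so a small third box is enough to keep quorum. A rough sketch of what that looks like (host names and brick paths here are illustrative, not from your setup):

```shell
# Create a replica 3 volume with 2 full data bricks + 1 arbiter brick.
# node1/node2 hold the data; the arbiter brick stores metadata only,
# so the third host can be a much smaller machine.
gluster volume create engine replica 3 arbiter 1 \
    node1:/gluster_bricks/engine/brick \
    node2:/gluster_bricks/engine/brick \
    node3:/gluster_bricks/engine/brick

gluster volume start engine
```

With this layout, losing any single node (including the arbiter) still leaves two quorum votes, so the volume stays writable.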
On Dec 14, 2017 at 1:51, "Andrei V" <[email protected]> wrote:

> Hi, Donny,
>
> Thanks for the link.
>
> Am I understanding correctly that I need at least a 3-node system to run
> in failover mode? So far I plan to deploy only 2 nodes, with either a
> hosted or a bare-metal engine.
>
> *The key thing to keep in mind regarding host maintenance and downtime
> is that this converged three node system relies on having at least two
> of the nodes up at all times. If you bring down two machines at once,
> you'll run afoul of the Gluster quorum rules that guard us from
> split-brain states in our storage, the volumes served by your remaining
> host will go read-only, and the VMs stored on those volumes will pause
> and require a shutdown and restart in order to run again.*
>
> What happens if, in a 2-node GlusterFS system (with a hosted engine),
> one node goes down?
> A bare-metal engine can manage this situation, but I'm not sure about a
> hosted engine.
>
> On 12/13/2017 11:17 PM, Donny Davis wrote:
>
> I would start here:
> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>
> Pretty good basic guidance.
>
> Also, with software-defined storage it's recommended there are at least
> two "storage" nodes and one arbiter node to maintain quorum.
>
> On Wed, Dec 13, 2017 at 3:45 PM, Andrei V <[email protected]> wrote:
>
>> Hi,
>>
>> I'm going to set up a relatively simple 2-node system with oVirt 4.1,
>> GlusterFS, and several VMs running.
>> Each node is going to be installed on a dual-Xeon system with a single
>> RAID 5.
>>
>> The oVirt node installer uses a relatively simple default partitioning
>> scheme. Should I leave it as is, or are there better options?
>> I never used GlusterFS before, so any expert opinion is very welcome.
>>
>> Thanks in advance.
>> Andrei
>> _______________________________________________
>> Users mailing list
>> [email protected]
>> http://lists.ovirt.org/mailman/listinfo/users

