On Mon, Jan 1, 2018 at 4:00 PM, Andrei V <andre...@starlett.lv> wrote:
> On 01/01/2018 10:10 AM, Yaniv Kaul wrote:
>> On Mon, Jan 1, 2018 at 12:50 AM, Andrei V <andre...@starlett.lv> wrote:
>>> Hi!
>>>
>>> I'm installing a 2-node failover cluster (2 x Xeon servers with local
>>> RAID 5 / ext4 for the oVirt storage domains).
>>> Now I have a dilemma: use GlusterFS replica 2, or stick with NFS?
>>
>> Replica 2 is not good enough, as it can leave you with split brain. This
>> has been discussed on the mailing list several times.
>> How do you plan to achieve HA with NFS? With DRBD?
>
> Hi, Yaniv,
> Thanks a lot for the detailed explanation!
>
> I know replica 2 is not an optimal solution.
> Right now I have only 2 servers with internal RAIDs for the nodes, and by
> the end of this week the system has to be running in whatever condition.
> Maybe it's better to use a local storage domain on each node, set up an
> export domain on the backup node, and back up the VMs to the 2nd node at
> a timed interval? It's not highly available, yet a workable solution.
>
>>> The 4.2 Engine is running on separate hardware.
>>
>> Is the Engine also highly available?
>
> It's a KVM appliance that can be launched on 2 SuSE servers.
>
>>> Each node has its own storage domain (on an internal RAID).
>>
>> So some sort of replica 1 with geo-replication between them?
>
> Could it be the following?
> 1) A local storage domain on each node
> 2) GlusterFS geo-replication over these directories? Not sure this will
> work.
>
>>> All VMs must be highly available.
>>
>> Without shared storage, it may be tricky.
>
> It seems a timed VM backup to the 2nd node is enough for now.
> With the current hardware, anything beyond that is too cumbersome to set
> up.

Agreed.
Y.

>>> One of the VMs, an accounting/stock-control system with a FireBird SQL
>>> server on CentOS, is speed-critical.
>>
>> But is IO the bottleneck? Are you using SSDs / NVMe drives?
>> I'm not familiar enough with the FireBird SQL server - does it have
>> application-layer replication you might opt to use?
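For the geo-replication idea in 2) above, a minimal sketch of the Gluster side, assuming a Gluster volume already exists on each node (the volume names `datavol` / `backupvol` and the host `node2` are placeholders) and passwordless SSH is set up between the nodes:

```shell
# Create a geo-replication session from the primary node's volume to the
# backup node's volume; push-pem distributes the SSH keys to the slave.
gluster volume geo-replication datavol node2::backupvol create push-pem

# Start asynchronous replication, then verify the session reaches
# Active/Changelog Crawl status.
gluster volume geo-replication datavol node2::backupvol start
gluster volume geo-replication datavol node2::backupvol status
```

Keep in mind geo-replication is asynchronous and one-way, so it fits the timed-backup scenario, but it is not a substitute for shared storage when it comes to HA.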
>> In that case, you could pass through an NVMe disk and have the
>> application layer perform the replication between the nodes.
>
>>> No load balancing between the nodes is necessary. The 2nd is just for
>>> backup if the 1st for whatever reason goes up in smoke. All VM disks
>>> must be replicated to the backup node in near real time, or in the
>>> worst case every 1-2 hours.
>>> GlusterFS solves this issue, yet at a high performance penalty.
>>
>> The problem with a passive backup is that you never know whether it'll
>> really work when needed. This is why active-active is often preferred.
>> It's also usually more cost-effective - instead of some HW lying around.
>
>>> From what I read here
>>> http://lists.ovirt.org/pipermail/users/2017-July/083144.html
>>> GlusterFS performance with oVirt is not very good right now because
>>> QEMU uses FUSE instead of libgfapi.
>>>
>>> What is the optimal way to go?
>>
>> It's hard to answer without additional details.
>> Y.
>
>>> Thanks in advance.
>>> Andrei
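On the libgfapi point: the FUSE path is still the default, but libgfapi-based access can be toggled on the engine side via engine-config. A rough sketch (try it on a non-production setup first, and check the current value before changing it):

```shell
# Enable libgfapi disk access for 4.2-level clusters.
# Check the current setting first:
engine-config -g LibgfApiSupported

# Then enable it for cluster version 4.2:
engine-config -s LibgfApiSupported=true --cver=4.2

# The engine must be restarted for the change to take effect.
systemctl restart ovirt-engine
```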
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users