Re: [ovirt-users] moving storage away from a single point of failure

2015-09-25 Thread Nicolas Ecarnot
On 25/09/2015 01:57, Donny Davis wrote:
> Gluster is pretty stable, you shouldn't have any issues. It works best
> when there are more than 2 or 3 nodes though.

Hi,

On a site, I have an oVirt setup made of 3 nodes acting as compute+storage based on gluster, plus another standalone engine. The
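For context, a three-node replica 3 volume like that is typically built with the gluster CLI along these lines; a minimal sketch, assuming placeholder hostnames (node1..node3), brick paths, and volume name rather than the actual configuration from this site:

    # from node1, with glusterfs-server installed on all three nodes
    gluster peer probe node2
    gluster peer probe node3

    # replica 3 volume, one brick per node
    gluster volume create data replica 3 \
        node1:/gluster/data/brick node2:/gluster/data/brick node3:/gluster/data/brick

    # apply the stock virt-store tunables, then start serving
    gluster volume set data group virt
    gluster volume start data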

Re: [ovirt-users] moving storage away from a single point of failure

2015-09-25 Thread Donny Davis
I don't have a large gluster environment deployed on hardware, so I have no data.

On Fri, Sep 25, 2015 at 2:55 AM, Nicolas Ecarnot wrote:
> On 25/09/2015 01:57, Donny Davis wrote:
>
>> Gluster is pretty stable, you shouldn't have any issues. It works best
>> when there

Re: [ovirt-users] moving storage away from a single point of failure

2015-09-24 Thread Alan Murrell
On 22/09/15 02:32 AM, Daniel Helgenberger wrote:
> - Do not run compute and storage on the same hosts

Is the Engine considered to be the "Compute" part of things?

Regards,

Alan

Re: [ovirt-users] moving storage away from a single point of failure

2015-09-24 Thread Michael Kleinpaste
I thought I had read where Gluster had corrected this behavior. That's disappointing.

On Tue, Sep 22, 2015 at 4:18 AM Alastair Neil wrote:
> My own experience with gluster for VMs is that it is just fine until you
> need to bring down a node and need the VMs to be live.
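The behavior in question is whole-file self-heal: a multi-gigabyte VM image that changed while a node was down gets healed in full. A hedged sketch of one mitigation, sharding (added in GlusterFS 3.7), which confines a heal to the shards that actually changed; the volume name is a placeholder, and sharding only applies to files created after it is enabled:

    # split new image files into shards so heals copy only changed pieces
    gluster volume set data features.shard on
    gluster volume set data features.shard-block-size 512MB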

Re: [ovirt-users] moving storage away from a single point of failure

2015-09-22 Thread Daniel Helgenberger
On 18.09.2015 23:04, Robert Story wrote:
> Hi,

Hello Robert,

> I'm running oVirt 3.5 in our lab, and currently I'm using NFS to a single
> server. I'd like to move away from having a single point of failure.

In this case have a look at iSCSI or FC storage. If you have redundant controllers
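For the iSCSI route, the redundancy comes from dm-multipath across the two controllers rather than from the storage domain itself. A minimal sketch, assuming made-up portal addresses and an initiator already configured:

    # discover the target through both controller portals, then log in
    iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260
    iscsiadm -m discovery -t sendtargets -p 10.0.1.10:3260
    iscsiadm -m node -l

    # confirm both paths are grouped under a single multipath device
    multipath -ll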

Re: [ovirt-users] moving storage away from a single point of failure

2015-09-22 Thread Alastair Neil
My own experience with gluster for VMs is that it is just fine until you need to bring down a node and need the VMs to be live. I have a replica 3 gluster server and, while the VMs are fine while the node is down, when it is brought back up, gluster attempts to heal the files on the downed node
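When the node does come back, the resulting heal can at least be watched and somewhat throttled; a hedged sketch with a placeholder volume name (option names vary by gluster release, so treat these as assumptions to verify):

    # list files still pending heal and the crawl's progress
    gluster volume heal data info
    gluster volume heal data statistics

    # reduce how many files each client heals in the background at once
    gluster volume set data cluster.background-self-heal-count 4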