Re: [ovirt-users] moving storage away from a single point of failure

2015-09-25 Thread Nicolas Ecarnot

On 25/09/2015 01:57, Donny Davis wrote:

Gluster is pretty stable; you shouldn't have any issues. It works best
when there are more than 2 or 3 nodes, though.


Hi,

On a site, I have an oVirt setup made of 3 nodes acting as
compute+storage based on gluster, plus another standalone engine.
The replica-3 setup seems to do a good job, even when I stress-tested it
brutally.
But as I have the opportunity to add more storage nodes, I'm interested
in the comment above.


When adding another node, what is the effect on the duration of a
rebuild when recovering from a crash? What is the effect on performance?
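
For reference, once a node is back in, the rebuild (heal) progress can be
watched with something like the following. This is only a sketch: the volume
name vmstore is a placeholder, and the statistics subcommand may not exist on
older Gluster releases.

  # List the files still pending heal on each brick of the volume
  gluster volume heal vmstore info

  # Per-brick count of entries left to heal (version-dependent)
  gluster volume heal vmstore statistics heal-count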


Regards,

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving storage away from a single point of failure

2015-09-25 Thread Donny Davis
I don't have a large gluster environment deployed on hardware, so I have no
data.

On Fri, Sep 25, 2015 at 2:55 AM, Nicolas Ecarnot wrote:

> [...]
>
> When adding another node, what is the effect on the duration of a rebuild
> when recovering from a crash? What is the effect on performance?
>
> [...]



-- 
Donny Davis
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving storage away from a single point of failure

2015-09-24 Thread Alan Murrell


On 22/09/15 02:32 AM, Daniel Helgenberger wrote:
> - Do not run compute and storage on the same hosts

Is the Engine considered to be the "Compute" part of things?

Regards,

Alan

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving storage away from a single point of failure

2015-09-24 Thread Michael Kleinpaste
I thought I had read that Gluster had corrected this behavior. That's
disappointing.

On Tue, Sep 22, 2015 at 4:18 AM Alastair Neil  wrote:

> My own experience with gluster for VMs is that it is just fine until you
> need to bring down a node and need the VMs to be live. I have a replica 3
> gluster server and, while the VMs are fine while the node is down, when it
> is brought back up, gluster attempts to heal the files on the downed node
> and the ensuing i/o freezes the VMs until the heal is complete, and with
> many VMs on a storage volume that can take hours.
>
> [...]
-- 
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
michael.kleinpa...@sharperlending.com
(509) 324-1230   Fax: (509) 324-1234
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving storage away from a single point of failure

2015-09-22 Thread Daniel Helgenberger


On 18.09.2015 23:04, Robert Story wrote:
> Hi,

Hello Robert,

> 
> I'm running oVirt 3.5 in our lab, and currently I'm using NFS to a single
> server. I'd like to move away from having a single point of failure.

In this case, have a look at iSCSI or FC storage. If you have redundant
controllers and switches, the setup should be reliable enough.
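
Purely as an illustration, logging in to such a target over two redundant
portals and letting multipath aggregate the paths looks roughly like this
(the portal addresses and the IQN are placeholders, not anything from this
thread):

  # Discover the target through both controller portals
  iscsiadm -m discovery -t sendtargets -p 10.0.0.10
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10

  # Log in to the target on every discovered portal
  iscsiadm -m node -T iqn.2015-09.example.com:storage.lun1 --login

  # Let device-mapper-multipath present the two paths as one device
  mpathconf --enable
  systemctl start multipathd
  multipath -ll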

> Watching the mailing list, all the issues with gluster getting out of sync
> and replica issues have me nervous about gluster, plus I just have 2
> machines with lots of drive bays for storage.

Still, I would stick to gluster if you want replicated storage:
 - It is supported out of the box and you get active support from lots of
users here
 - Replica3 will solve most out-of-sync cases (a minimal creation sketch
follows below)
 - I dare say other replicated storage backends suffer from the same issues;
this is by design.
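
For completeness, a minimal sketch of what such a replica-3 VM store could
look like; the hostnames and brick paths are placeholders, and the virt
profile applied below is the one shipped with the glusterfs packages:

  # Create a replica-3 volume across three hosts
  gluster volume create vmstore replica 3 \
      host1:/gluster/brick1/vmstore \
      host2:/gluster/brick1/vmstore \
      host3:/gluster/brick1/vmstore

  # Apply the virt profile and the vdsm/kvm ownership oVirt expects
  gluster volume set vmstore group virt
  gluster volume set vmstore storage.owner-uid 36
  gluster volume set vmstore storage.owner-gid 36
  gluster volume start vmstore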

Two things you should keep in mind when running gluster in production:
 - Do not run compute and storage on the same hosts
 - Do not (yet) use Gluster as storage for Hosted Engine

> I've been reading about GFS2
> and DRBD, and wanted opinions on if either is a good/bad idea, or to see if
> there are other alternatives.
> 
> My oVirt setup is currently 5 nodes and about 25 VMs, might double in size
> eventually, but probably won't get much bigger than that.

In the end, it is quite easy to migrate storage domains. If you are
satisfied with your lab setup, put it in production, add storage later, and
move the disks. Afterwards, remove the old storage domains.
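
As a rough sketch of that disk move: if memory serves, the v3 REST API
exposes a move action on a VM's disk. The engine hostname, credentials and
UUIDs below are placeholders, so verify the call against the API of the
engine version you actually run:

  # Move one disk of a VM to another storage domain (oVirt 3.5, API v3)
  curl -k -u admin@internal:PASSWORD \
      -H "Content-Type: application/xml" \
      -X POST \
      -d '<action><storage_domain id="TARGET_SD_UUID"/></action>' \
      https://engine.example.com/api/vms/VM_UUID/disks/DISK_UUID/move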

My two cents with gluster: it has been running quite stably for some time
now, as long as you do not touch it.
I never had issues when adding bricks, though removing and replacing them
can be very tricky.

HTH,

> 
> 
> Thanks,
> 
> Robert
> 

-- 
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN


www.m-box.de  www.monkeymen.tv

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving storage away from a single point of failure

2015-09-22 Thread Alastair Neil
My own experience with gluster for VMs is that it is just fine until you
need to bring down a node and need the VMs to be live. I have a replica 3
gluster server and, while the VMs are fine while the node is down, when it
is brought back up, gluster attempts to heal the files on the downed node
and the ensuing i/o freezes the VMs until the heal is complete, and with
many VMs on a storage volume that can take hours. I have migrated all my
critical VMs back onto NFS. There are changes coming soon in gluster that
will hopefully mitigate this (better granularity in the data heals, i/o
throttling during heals, etc.), but for now I am keeping most of my VMs on
NFS.
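
For what it is worth, sharding and granular entry heals are the features
usually pointed at here; assuming a Gluster version that ships them (sharding
appeared around 3.7, granular entry heal later) and a volume named vmstore,
enabling them looks roughly like this. Note that sharding only applies to
images created after it is turned on:

  # Split large VM images into shards so heals touch only changed pieces
  gluster volume set vmstore features.shard on
  gluster volume set vmstore features.shard-block-size 64MB

  # Heal directory entries at finer granularity (needs a recent release)
  gluster volume set vmstore cluster.granular-entry-heal on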

The alternative is to set the quorum so that the VM volume goes read-only
when a node goes down. This may seem mad, but at least your VMs are frozen
only while a node is down and not for hours afterwards.
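
If anyone wants to experiment with that trade-off, these are roughly the
quorum knobs involved; vmstore is a placeholder volume name and the defaults
differ between Gluster releases, so check yours first:

  # Client-side quorum: writes are rejected when fewer than half of the
  # replica bricks are reachable
  gluster volume set vmstore cluster.quorum-type auto

  # Server-side quorum: bricks on a node that loses quorum with the rest of
  # the trusted pool are shut down
  gluster volume set vmstore cluster.server-quorum-type server
  gluster volume set all cluster.server-quorum-ratio 51%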



On 22 September 2015 at 05:32, Daniel Helgenberger <
daniel.helgenber...@m-box.de> wrote:

> [...]
>
> Still, I would stick to gluster if you want replicated storage:
> [...]
>
> Two things you should keep in mind when running gluster in production:
>  - Do not run compute and storage on the same hosts
>  - Do not (yet) use Gluster as storage for Hosted Engine
>
> [...]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users