Re: [ovirt-users] Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System

2017-12-13 Thread Simone Tiraboschi
On Wed, Dec 13, 2017 at 11:51 PM, Andrei V  wrote:

> Hi, Donny,
>
> Thanks for the link.
>
> Do I understand correctly that I need at least a 3-node system to run in
> failover mode? So far I plan to deploy only 2 nodes, either with a hosted
> or with a bare metal engine.
>
> *The key thing to keep in mind regarding host maintenance and downtime is
> that this converged  three node system relies on having at least two of the
> nodes up at all times. If you bring down  two machines at once, you'll run
> afoul of the Gluster quorum rules that guard us from split-brain states in
> our storage, the volumes served by your remaining host will go read-only,
> and the VMs stored on those volumes will pause and require a shutdown and
> restart in order to run again.*
>
> What happens in a 2-node GlusterFS system (with hosted engine) if one node
> goes down?
> A bare metal engine can manage this situation, but I'm not sure about a
> hosted engine.
>

In order to be sure you cannot be affected by a split-brain issue, you
need a full replica 3 environment, or at least replica 3 with an arbiter node:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/

Otherwise, if for any reason (like a network split) you have two divergent
copies of a file, you simply do not have enough information to
authoritatively pick the right copy and discard the other.
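
For illustration, here is a minimal sketch of how a "replica 3 arbiter 1"
volume could be created from a script. The hostnames, brick path and volume
name below are placeholders I made up, not a recommendation for your setup:

#!/usr/bin/env python3
# Minimal sketch: create a "replica 3 arbiter 1" Gluster volume.
# Hostnames, brick paths and the volume name are placeholders.
import subprocess

BRICKS = [
    "gluster1.example.com:/gluster/brick1/engine",  # full data copy
    "gluster2.example.com:/gluster/brick1/engine",  # full data copy
    "gluster3.example.com:/gluster/brick1/engine",  # arbiter: metadata only
]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# The arbiter brick stores only file metadata, so the third node can be a
# small machine: it still breaks ties for quorum without holding full data.
run(["gluster", "volume", "create", "engine",
     "replica", "3", "arbiter", "1"] + BRICKS)
run(["gluster", "volume", "start", "engine"])

With an arbiter the third node only needs enough space for metadata, which is
usually the cheapest way out of the 2-node quorum problem.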


>
>
>
> On 12/13/2017 11:17 PM, Donny Davis wrote:
>
> I would start here
> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>
> Pretty good basic guidance.
>
> Also, with software-defined storage it's recommended that there are at least
> two "storage" nodes and one arbiter node to maintain quorum.
>
> On Wed, Dec 13, 2017 at 3:45 PM, Andrei V  wrote:
>
>> Hi,
>>
>> I'm going to set up a relatively simple 2-node system with oVirt 4.1,
>> GlusterFS, and several VMs running.
>> Each node is going to be installed on a dual Xeon system with a single RAID 5.
>>
>> The oVirt node installer uses a relatively simple default partitioning scheme.
>> Should I leave it as is, or are there better options?
>> I have never used GlusterFS before, so any expert opinion is very welcome.
>>
>> Thanks in advance.
>> Andrei
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System

2017-12-13 Thread Artem Tambovskiy
Hi,

AFAIK, during hosted engine deployment the installer will check the GlusterFS
replica type, and replica 3 is a mandatory requirement. Previously, I got
advice within this mailing list to look at a DRBD solution if you don't
have a third node to run a GlusterFS replica 3.
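
As a quick sanity check before starting the hosted-engine deployment,
something like the following sketch could confirm the volume layout; the
volume name "engine" is only an example, not necessarily what you will use:

#!/usr/bin/env python3
# Sketch: print the layout of a Gluster volume so you can confirm it is
# "replica 3" (or "replica 3 arbiter 1") before deploying hosted engine.
# The volume name "engine" is only an example.
import subprocess

info = subprocess.run(["gluster", "volume", "info", "engine"],
                      capture_output=True, text=True, check=True).stdout

for line in info.splitlines():
    # "Type:" and "Number of Bricks:" are the fields that show the layout,
    # e.g. "Type: Replicate" and "Number of Bricks: 1 x (2 + 1) = 3".
    if line.strip().startswith(("Type:", "Number of Bricks:")):
        print(line.strip())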

On Dec 14, 2017 at 1:51 AM, "Andrei V"  wrote:

> Hi, Donny,
>
> Thanks for the link.
>
> Do I understand correctly that I need at least a 3-node system to run in
> failover mode? So far I plan to deploy only 2 nodes, either with a hosted
> or with a bare metal engine.
>
> *The key thing to keep in mind regarding host maintenance and downtime is
> that this converged  three node system relies on having at least two of the
> nodes up at all times. If you bring down  two machines at once, you'll run
> afoul of the Gluster quorum rules that guard us from split-brain states in
> our storage, the volumes served by your remaining host will go read-only,
> and the VMs stored on those volumes will pause and require a shutdown and
> restart in order to run again.*
>
> What happens in a 2-node GlusterFS system (with hosted engine) if one node
> goes down?
> A bare metal engine can manage this situation, but I'm not sure about a
> hosted engine.
>
>
> On 12/13/2017 11:17 PM, Donny Davis wrote:
>
> I would start here
> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>
> Pretty good basic guidance.
>
> Also, with software-defined storage it's recommended that there are at least
> two "storage" nodes and one arbiter node to maintain quorum.
>
> On Wed, Dec 13, 2017 at 3:45 PM, Andrei V  wrote:
>
>> Hi,
>>
>> I'm going to set up a relatively simple 2-node system with oVirt 4.1,
>> GlusterFS, and several VMs running.
>> Each node is going to be installed on a dual Xeon system with a single RAID 5.
>>
>> The oVirt node installer uses a relatively simple default partitioning scheme.
>> Should I leave it as is, or are there better options?
>> I have never used GlusterFS before, so any expert opinion is very welcome.
>>
>> Thanks in advance.
>> Andrei
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System

2017-12-13 Thread Andrei V
Hi, Donny,

Thanks for the link.

Do I understand correctly that I need at least a 3-node system to run in
failover mode? So far I plan to deploy only 2 nodes, either with a
hosted or with a bare metal engine.

/The key thing to keep in mind regarding host maintenance and downtime
is that this *converged  three node system relies on having at least two
of the nodes up at all times*. If you bring down  two machines at once,
you'll run afoul of the Gluster quorum rules that guard us from
split-brain states in our storage, the volumes served by your remaining
host will go read-only, and the VMs stored on those volumes will pause
and require a shutdown and restart in order to run again./

What happens in a 2-node GlusterFS system (with hosted engine) if one node
goes down?
A bare metal engine can manage this situation, but I'm not sure about a
hosted engine.


On 12/13/2017 11:17 PM, Donny Davis wrote:
> I would start here
> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>
> Pretty good basic guidance. 
>
> Also, with software-defined storage it's recommended that there are at least
> two "storage" nodes and one arbiter node to maintain quorum.
>
> On Wed, Dec 13, 2017 at 3:45 PM, Andrei V  wrote:
>
> Hi,
>
> I'm going to set up a relatively simple 2-node system with oVirt 4.1,
> GlusterFS, and several VMs running.
> Each node is going to be installed on a dual Xeon system with a single
> RAID 5.
>
> The oVirt node installer uses a relatively simple default partitioning
> scheme.
> Should I leave it as is, or are there better options?
> I have never used GlusterFS before, so any expert opinion is very welcome.
>
> Thanks in advance.
> Andrei
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
> 
>
>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System

2017-12-13 Thread Donny Davis
I would start here
https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/

Pretty good basic guidance.

Also, with software-defined storage it's recommended that there are at least two
"storage" nodes and one arbiter node to maintain quorum.

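A hedged sketch of how client- and server-side quorum could be enabled on such
a volume (the volume name "data" is a placeholder; these are standard Gluster
quorum options, but check the defaults shipped with your version):

#!/usr/bin/env python3
# Sketch: enable client-side and server-side quorum on a Gluster volume.
# On a replica 3 (or arbiter) volume this makes writes require a majority
# of bricks, which is what guards against split brain.
# The volume name "data" is a placeholder.
import subprocess

def volume_set(volume, option, value):
    cmd = ["gluster", "volume", "set", volume, option, value]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

volume_set("data", "cluster.quorum-type", "auto")           # client-side quorum
volume_set("data", "cluster.server-quorum-type", "server")  # server-side quorum
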
On Wed, Dec 13, 2017 at 3:45 PM, Andrei V  wrote:

> Hi,
>
> I'm going to set up a relatively simple 2-node system with oVirt 4.1,
> GlusterFS, and several VMs running.
> Each node is going to be installed on a dual Xeon system with a single RAID 5.
>
> The oVirt node installer uses a relatively simple default partitioning scheme.
> Should I leave it as is, or are there better options?
> I have never used GlusterFS before, so any expert opinion is very welcome.
>
> Thanks in advance.
> Andrei
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Q: Partitioning - oVirt 4.1 & GlusterFS 2-node System

2017-12-13 Thread Andrei V
Hi,

I'm going to set up a relatively simple 2-node system with oVirt 4.1,
GlusterFS, and several VMs running.
Each node is going to be installed on a dual Xeon system with a single RAID 5.

The oVirt node installer uses a relatively simple default partitioning scheme.
Should I leave it as is, or are there better options?
I have never used GlusterFS before, so any expert opinion is very welcome.

Thanks in advance.
Andrei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users