On 2015-10-12 14:04, Nir Soffer wrote:

>> On Mon, Oct 12, 2015 at 11:14 AM, Nico <glus...@distran.org> wrote:
> Yes, engine will let you use such a volume in 3.5 - this is a bug. In 3.6
> you will not be able to use such a setup.
> replica 2 fails in a very bad way when one brick is down; the application
> may get stale data, and this breaks sanlock. You will get stuck with an SPM
> that cannot be stopped, and other fun stuff.
> You don't want to go in this direction, and we will not be able to support
> that.
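
(As a side note, the replica 2 failure mode described above can be partly mitigated with GlusterFS quorum options - this is only a sketch, assuming a volume named "ovirt", and with only two bricks client quorum will still make the volume read-only when one brick is down rather than allowing safe writes:

    gluster volume set ovirt cluster.quorum-type auto
    gluster volume set ovirt cluster.server-quorum-type server

The first option makes clients refuse writes without a majority of bricks; the second makes glusterd stop the bricks themselves when server quorum is lost.)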

For the record, I already rebooted node1, and node2 took over the
existing VMs from node1 and vice versa.

GlusterFS worked fine and the oVirt application was still working fine; I
guess it is because it was a soft reboot, which stops the services
gracefully.

I got another case where I broke the network on the 2 nodes
simultaneously after a bad manipulation in the oVirt GUI, and I got a
split-brain.

I kept the output from that very moment:

[root@devnix-virt-master02 nets]# gluster volume heal ovirt info
Brick devnix-virt-master01:/gluster/ovirt/
Number of entries in split-brain: 1 

Brick devnix-virt-master02:/gluster/ovirt/
Number of entries in split-brain: 1 

The file had the same size on both nodes, so it was hard to select
one. Finally I chose the younger one and everything was back online after
the heal.
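
(For future reference: GlusterFS 3.7 and later can resolve this kind of split-brain from the CLI instead of picking a copy by hand - a sketch, where the file path inside the volume is a placeholder:

    gluster volume heal ovirt info split-brain
    gluster volume heal ovirt split-brain latest-mtime /path/inside/volume/file

The "latest-mtime" policy keeps the copy with the newest modification time, i.e. the "younger" one; "bigger-file" and "source-brick" policies also exist when size or a trusted brick is the better criterion.)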

Is this the kind of problem you are talking about with 2 nodes?

For now, I don't have the budget for a third node, so I'm a bit stuck.
I have a third device, but it is for backup: it has a lot of storage but
low CPU capabilities (no VT-x), so I can't use it as a hypervisor.

I could maybe use it as a third brick, but is it possible to have this
kind of configuration? 2 active nodes as hypervisors and a third one only
for the gluster replica 3?
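
(One option that fits a weak third box is the arbiter configuration introduced in GlusterFS 3.7: the third brick stores only file metadata, so it needs little disk and CPU but still breaks ties and prevents split-brain. A sketch with hypothetical hostnames and brick paths:

    gluster volume create ovirt replica 3 arbiter 1 \
        node1:/gluster/ovirt node2:/gluster/ovirt backup:/gluster/ovirt-arbiter

The arbiter node never serves data, so the two hypervisors keep the full copies while the backup box only arbitrates quorum.)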


