Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-28 Thread Goorkate, B.J.
Hi,

I currently have a couple of VMs with little disk I/O, so I will
put them on the 4th node. 

I can even use the 4th node to deploy a brick if one of the replica-3
nodes fails.

Thanks!

Regards,

Bertjan

On Wed, Sep 28, 2016 at 11:50:21AM +0530, Sahina Bose wrote:
> 
> 
> On Tue, Sep 27, 2016 at 8:59 PM, Goorkate, B.J. wrote:
> 
> Hi Sahina,
> 
> First: sorry for my delayed response. I wasn't able to respond earlier.
> 
I already planned on adding the 4th node as a gluster client, so thank you
for confirming that this works.
> 
> Why I was in doubt is that certain VMs with a lot of storage I/O on the
> 4th node have to replicate to 3 other hosts (the replica-3 gluster nodes)
> over the storage network, while a VM on 1 of the replica-3 gluster nodes
> only has to replicate to two other nodes over the network, thus creating
> less network traffic.
> 
> Does this make sense?
> 
> And if it does: can that be an issue?
> 
> 
> IIUC, the 4th node that you add to the cluster is serving only compute and
> there is no storage (bricks) capacity added. In this case, yes, all reads and
> writes are over the network - this is like a standard oVirt deployment where
> storage is over the network (non-hyperconverged).
> While theoretically this looks like an issue, it may not be, as there are
> multiple factors affecting performance. You will need to measure the impact on
> guest performance when VMs run on this node and see if it is acceptable to
> you. One thing you could do is schedule VMs that do not have stringent
> performance requirements on the 4th node.
> 
> There are also improvements planned in upcoming releases of gluster
> (compound FOPS, libgfapi access) which should improve I/O performance
> further, so whatever you see now should only get better.
> 
> 
> 
> Regards,
> 
> Bertjan
> 
> On Fri, Sep 23, 2016 at 04:47:25PM +0530, Sahina Bose wrote:
> >
> >
> > On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari wrote:
> >
> > I'm struggling with the same problem (I say struggling because I'm still
> > having stability issues for what I consider a stable cluster) but you can:
> > - create a replica 3 engine gluster volume
> > - create replica 2 data, iso and export volumes
> >
> >
> > What are the stability issues you're facing? The data volume, if used as
> > a data storage domain, should be a replica 3 volume as well.
> >
> >
> >
> > Deploy the hosted-engine on the first VM (with the engine volume) from
> > the CLI, then log in to the oVirt admin portal, enable gluster support,
> > install *and deploy* host2 and host3 from the GUI (where the engine
> > bricks are) and then install host4 without deploying. This should get
> > you the 4 hosts online, but the engine will run only on the first 3.
> >
> >
> > Right. You can add the 4th node to the cluster but not have any bricks
> > on this volume, in which case VMs will run on this node but will access
> > data from the other 3 nodes.
> >
> >
> >
> > 2016-09-23 11:14 GMT+02:00 Goorkate, B.J. :
> >
> > Dear all,
> >
> > I've tried to find a way to add a 4th oVirt-node to my existing
> > 3-node setup with replica-3 gluster storage, but found no usable
> > solution yet.
> >
> > From what I read, it's not wise to create a replica-4 gluster
> > storage, because of bandwidth overhead.
> >
> > Is there a safe way to do this and still have 4 equal oVirt nodes?
> >
> > Thanks in advance!
> >
> > Regards,
> >
> > Bertjan
> >

Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-27 Thread Sahina Bose
On Tue, Sep 27, 2016 at 8:59 PM, Goorkate, B.J. wrote:

> Hi Sahina,
>
> First: sorry for my delayed response. I wasn't able to respond earlier.
>
> I already planned on adding the 4th node as a gluster client, so thank you
> for confirming that this works.
>
> Why I was in doubt is that certain VMs with a lot of storage I/O on the
> 4th node have to replicate to 3 other hosts (the replica-3 gluster nodes)
> over the storage network, while a VM on 1 of the replica-3 gluster nodes
> only has to replicate to two other nodes over the network, thus creating
> less network traffic.
>
> Does this make sense?
>
> And if it does: can that be an issue?
>

IIUC, the 4th node that you add to the cluster is serving only compute and
there is no storage (bricks) capacity added. In this case, yes, all reads
and writes are over the network - this is like a standard oVirt deployment
where storage is over the network (non-hyperconverged).
While theoretically this looks like an issue, it may not be, as there are
multiple factors affecting performance. You will need to measure the impact
on guest performance when VMs run on this node and see if it is acceptable
to you. One thing you could do is schedule VMs that do not have stringent
performance requirements on the 4th node.
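
As a rough way to gauge that impact, something like this could work (a
sketch only - the volume name "data", server "node1" and mount point are
placeholders; oflag=direct bypasses the page cache):

  # fuse-mount the data volume on the 4th node, the same access path oVirt uses
  mount -t glusterfs node1:/data /mnt/gtest
  # sequential write throughput; repeat from a replica node and compare
  dd if=/dev/zero of=/mnt/gtest/iotest.img bs=1M count=1024 oflag=direct
  umount /mnt/gtest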

There are also improvements planned in upcoming releases of gluster
(compound FOPS, libgfapi access) which should improve I/O performance
further, so whatever you see now should only get better.


> Regards,
>
> Bertjan
>
> On Fri, Sep 23, 2016 at 04:47:25PM +0530, Sahina Bose wrote:
> >
> >
> > On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari wrote:
> >
> > I'm struggling with the same problem (I say struggling because I'm still
> > having stability issues for what I consider a stable cluster) but you can:
> > - create a replica 3 engine gluster volume
> > - create replica 2 data, iso and export volumes
> >
> >
> > What are the stability issues you're facing? The data volume, if used as
> > a data storage domain, should be a replica 3 volume as well.
> >
> >
> >
> > Deploy the hosted-engine on the first VM (with the engine volume) from
> > the CLI, then log in to the oVirt admin portal, enable gluster support,
> > install *and deploy* host2 and host3 from the GUI (where the engine
> > bricks are) and then install host4 without deploying. This should get
> > you the 4 hosts online, but the engine will run only on the first 3.
> >
> >
> > Right. You can add the 4th node to the cluster but not have any bricks
> > on this volume, in which case VMs will run on this node but will access
> > data from the other 3 nodes.
> >
> >
> >
> > 2016-09-23 11:14 GMT+02:00 Goorkate, B.J. <b.j.goork...@umcutrecht.nl>:
> >
> > Dear all,
> >
> > I've tried to find a way to add a 4th oVirt-node to my existing
> > 3-node setup with replica-3 gluster storage, but found no usable
> > solution yet.
> >
> > From what I read, it's not wise to create a replica-4 gluster
> > storage, because of bandwidth overhead.
> >
> > Is there a safe way to do this and still have 4 equal oVirt nodes?
> >
> > Thanks in advance!
> >
> > Regards,
> >
> > Bertjan
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> >
> >
> > --
> > Davide Ferrari
> >

Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-27 Thread Goorkate, B.J.
Hi Sahina,

First: sorry for my delayed response. I wasn't able to respond earlier.

I already planned on adding the 4th node as a gluster client, so thank you for
confirming that this works.

Why I was in doubt is that certain VMs with a lot of storage I/O on the
4th node have to replicate to 3 other hosts (the replica-3 gluster nodes)
over the storage network, while a VM on 1 of the replica-3 gluster nodes
only has to replicate to two other nodes over the network, thus creating
less network traffic.

Does this make sense?

And if it does: can that be an issue? 

Regards,

Bertjan

On Fri, Sep 23, 2016 at 04:47:25PM +0530, Sahina Bose wrote:
> 
> 
> On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari  wrote:
> 
> I'm struggling with the same problem (I say struggling because I'm still
> having stability issues for what I consider a stable cluster) but you can:
> - create a replica 3 engine gluster volume
> - create replica 2 data, iso and export volumes
> 
> 
> What are the stability issues you're facing? The data volume, if used as a
> data storage domain, should be a replica 3 volume as well.
>  
> 
> 
> Deploy the hosted-engine on the first VM (with the engine volume) from the
> CLI, then log in to the oVirt admin portal, enable gluster support, install
> *and deploy* host2 and host3 from the GUI (where the engine bricks are) and
> then install host4 without deploying. This should get you the 4 hosts
> online, but the engine will run only on the first 3.
> 
> 
> Right. You can add the 4th node to the cluster but not have any bricks on
> this volume, in which case VMs will run on this node but will access data
> from the other 3 nodes.
>  
> 
> 
> 2016-09-23 11:14 GMT+02:00 Goorkate, B.J. :
> 
> Dear all,
> 
> I've tried to find a way to add a 4th oVirt-node to my existing
> 3-node setup with replica-3 gluster storage, but found no usable
> solution yet.
> 
> From what I read, it's not wise to create a replica-4 gluster
> storage, because of bandwidth overhead.
> 
> Is there a safe way to do this and still have 4 equal oVirt nodes?
> 
> Thanks in advance!
> 
> Regards,
> 
> Bertjan
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
> 
> --
> Davide Ferrari
> Senior Systems Engineer
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Davide Ferrari
Oh, thanks! Reinstalling the whole cluster from scratch right now to get it
right. If I run into the problem I described before again, I will open
another thread and attach the relevant logs.

2016-09-23 16:28 GMT+02:00 Sahina Bose :

>
>
> On Fri, Sep 23, 2016 at 7:54 PM, Davide Ferrari wrote:
>
>> Reading the glusterfs docs
>>
>> https://gluster.readthedocs.io/en/latest/Administrator%20Gui
>> de/arbiter-volumes-and-quorum/
>>
>> "In a replica 3 volume, client-quorum is enabled by default and set to
>> 'auto'. This means 2 bricks need to be up for the writes to succeed. Here
>> is how this configuration prevents files from ending up in split-brain:"
>>
>> So this means that if one of the machines with the 2 bricks (arbiter &
>> normal) fails, the other brick will be set RO, or am I missing something?
>> I mean, this config will be better in case of a network loss, and thus a
>> split brain, but it's far worse in case of a machine failing or being
>> rebooted for maintenance.
>>
>
> See the updated vol create command - you should set it up such that 2
> bricks in a sub-volume are not from the same host, thus you avoid the
> problem you describe above
>
>
>>
>>
>> 2016-09-23 16:11 GMT+02:00 Davide Ferrari :
>>
>>>
>>>
>>> 2016-09-23 15:57 GMT+02:00 Sahina Bose :
>>>

 You could do this - where Node2 & Node3 also have arbiter bricks.
 Arbiter bricks only store metadata and require very low storage capacity
 compared to the data bricks.

             Node1      Node2      Node3      Node4
  subvol 1   brick1     brick1     arb-brick
  subvol 2              arb-brick  brick1     brick1

>>>
>>> Ok, cool! And this won't pose any problem if Node2 or Node4 fail?
>>>
>>> The syntax should be this:
>>>
>>> gluster volume create data replica 3 arbiter 1 node1:/brick node2:/brick
>>> node2:/arb_brick node3:/brick node4:/brick node4:/arb_brick
>>>
>>> Is it a problem having more than one brick on the same host in the
>>> volume create syntax?
>>>
>>> Thanks again
>>>
>>> --
>>> Davide Ferrari
>>> Senior Systems Engineer
>>>
>>
>>
>>
>> --
>> Davide Ferrari
>> Senior Systems Engineer
>>
>
>


-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Sahina Bose
On Fri, Sep 23, 2016 at 7:54 PM, Davide Ferrari  wrote:

> Reading the glusterfs docs
>
> https://gluster.readthedocs.io/en/latest/Administrator%
> 20Guide/arbiter-volumes-and-quorum/
>
> "In a replica 3 volume, client-quorum is enabled by default and set to
> 'auto'. This means 2 bricks need to be up for the writes to succeed. Here
> is how this configuration prevents files from ending up in split-brain:"
>
> So this means that if one of the machines with the 2 bricks (arbiter &
> normal) fails, the other brick will be set RO, or am I missing something?
> I mean, this config will be better in case of a network loss, and thus a
> split brain, but it's far worse in case of a machine failing or being
> rebooted for maintenance.
>

See the updated vol create command - you should set it up such that 2
bricks in a sub-volume are not from the same host, thus you avoid the
problem you describe above
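
To double-check the quorum behaviour discussed above, something like this
should work (a sketch; "data" is a placeholder volume name, and "gluster
volume get" needs a reasonably recent gluster release):

  # show the effective client-quorum setting (defaults to 'auto' on replica 3)
  gluster volume get data cluster.quorum-type
  # list which brick sits on which host, per sub-volume
  gluster volume info data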


>
>
> 2016-09-23 16:11 GMT+02:00 Davide Ferrari :
>
>>
>>
>> 2016-09-23 15:57 GMT+02:00 Sahina Bose :
>>
>>>
>>> You could do this - where Node2 & Node3 also have arbiter bricks.
>>> Arbiter bricks only store metadata and require very low storage capacity
>>> compared to the data bricks.
>>>
>>>             Node1      Node2      Node3      Node4
>>> subvol 1    brick1     brick1     arb-brick
>>> subvol 2               arb-brick  brick1     brick1
>>>
>>
>> Ok, cool! And this won't pose any problem if Node2 or Node4 fail?
>>
>> The syntax should be this:
>>
>> gluster volume create data replica 3 arbiter 1 node1:/brick node2:/brick
>> node2:/arb_brick node3:/brick node4:/brick node4:/arb_brick
>>
>> Is it a problem having more than one brick on the same host in the volume
>> create syntax?
>>
>> Thanks again
>>
>> --
>> Davide Ferrari
>> Senior Systems Engineer
>>
>
>
>
> --
> Davide Ferrari
> Senior Systems Engineer
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Sahina Bose
On Fri, Sep 23, 2016 at 7:41 PM, Davide Ferrari  wrote:

>
>
> 2016-09-23 15:57 GMT+02:00 Sahina Bose :
>
>>
>> You could do this - where Node2 & Node3 also have arbiter bricks. Arbiter
>> bricks only store metadata and require very low storage capacity compared
>> to the data bricks.
>>
>>             Node1      Node2      Node3      Node4
>> subvol 1    brick1     brick1     arb-brick
>> subvol 2               arb-brick  brick1     brick1
>>
>
> Ok, cool! And this won't pose any problem if Node2 or Node4 fail?
>

No.

>
> The syntax should be this:
>
> gluster volume create data replica 3 arbiter 1 node1:/brick node2:/brick
> node2:/arb_brick node3:/brick node4:/brick node4:/arb_brick
>

Correction: the arb_bricks should be on a node different from their data bricks, as below:

gluster volume create data replica 3 arbiter 1 node1:/brick node2:/brick
node3:/arb_brick node3:/brick node4:/brick node2:/arb_brick
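
Putting the correction together, a sketch of the full flow (brick paths are
placeholders; the arbiter feature needs gluster 3.7 or newer):

  gluster volume create data replica 3 arbiter 1 \
    node1:/brick node2:/brick node3:/arb_brick \
    node3:/brick node4:/brick node2:/arb_brick
  gluster volume start data
  # verify: no sub-volume should have two of its bricks on the same host
  gluster volume info data
  gluster volume status data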



> Is it a problem having more than one brick on the same host in the volume
> create syntax?
>
> Thanks again
>
> --
> Davide Ferrari
> Senior Systems Engineer
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Davide Ferrari
Reading the glusterfs docs

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/

"In a replica 3 volume, client-quorum is enabled by default and set to
'auto'. This means 2 bricks need to be up for the writes to succeed. Here
is how this configuration prevents files from ending up in split-brain:"

So this means that if one of the machines with the 2 bricks (arbiter &
normal) fails, the other brick will be set RO, or am I missing something?
I mean, this config will be better in case of a network loss, and thus a
split brain, but it's far worse in case of a machine failing or being
rebooted for maintenance.


2016-09-23 16:11 GMT+02:00 Davide Ferrari :

>
>
> 2016-09-23 15:57 GMT+02:00 Sahina Bose :
>
>>
>> You could do this - where Node2 & Node3 also have arbiter bricks. Arbiter
>> bricks only store metadata and require very low storage capacity compared
>> to the data bricks.
>>
>>             Node1      Node2      Node3      Node4
>> subvol 1    brick1     brick1     arb-brick
>> subvol 2               arb-brick  brick1     brick1
>>
>
> Ok, cool! And this won't pose any problem if Node2 or Node4 fail?
>
> The syntax should be this:
>
> gluster volume create data replica 3 arbiter 1 node1:/brick node2:/brick
> node2:/arb_brick node3:/brick node4:/brick node4:/arb_brick
>
> Is it a problem having more than one brick on the same host in the volume
> create syntax?
>
> Thanks again
>
> --
> Davide Ferrari
> Senior Systems Engineer
>



-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Davide Ferrari
2016-09-23 15:57 GMT+02:00 Sahina Bose :

>
> You could do this - where Node2 & Node3 also have arbiter bricks. Arbiter
> bricks only store metadata and require very low storage capacity compared
> to the data bricks.
>
>             Node1      Node2      Node3      Node4
> subvol 1    brick1     brick1     arb-brick
> subvol 2               arb-brick  brick1     brick1
>

Ok, cool! And this won't pose any problem if Node2 or Node4 fail?

The syntax should be this:

gluster volume create data replica 3 arbiter 1 node1:/brick node2:/brick
node2:/arb_brick node3:/brick node4:/brick node4:/arb_brick

Is it a problem having more than one brick on the same host in the volume
create syntax?

Thanks again

-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Sahina Bose
On Fri, Sep 23, 2016 at 7:09 PM, Davide Ferrari  wrote:

>
>
> 2016-09-23 13:50 GMT+02:00 Sahina Bose :
>
> Ok, if I encounter similar problems again I will post logs here
>
>
>>
>> If you have additional capacity on the other 3 hosts, then yes, you can
>> create a new gluster volume with a brick on the newly added 4th node and
>> bricks from other nodes - this volume can be used as another storage
>> domain. You are not doing anything wrong :) Keep in mind that all gluster
>> volumes used as data storage domains should be replica 3 or replica 3
>> arbiter to avoid split-brain and data loss issues.
>>
>>
> Mmmmh, this is ringing an alarm bell then. So a 4-host configuration with
> all 4 hosts having a data domain in a replica-2 fashion is basically
> impossible (or at least not supported)? Is replica 3 arbiter 1 the only
> supported HA configuration? So if I want to expand storage (apart from
> adding disks to the same machines) I must add machines 3 by 3?
>
> Currently I have 4 machines with 4 disks each in a RAID-10 configuration,
> exposed as one brick. Which is the best HA solution in this scenario then?
>

You could do this - where Node2 & Node3 also have arbiter bricks. Arbiter
bricks only store metadata and require very low storage capacity compared
to the data bricks.

            Node1      Node2      Node3      Node4
subvol 1    brick1     brick1     arb-brick
subvol 2               arb-brick  brick1     brick1



> Thanks
>
> --
> Davide Ferrari
> Senior Systems Engineer
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Davide Ferrari
2016-09-23 13:50 GMT+02:00 Sahina Bose :

Ok, if I encounter similar problems again I will post logs here


>
> If you have additional capacity on the other 3 hosts, then yes, you can
> create a new gluster volume with a brick on the newly added 4th node and
> bricks from other nodes - this volume can be used as another storage
> domain. You are not doing anything wrong :) Keep in mind that all gluster
> volumes used as data storage domains should be replica 3 or replica 3
> arbiter to avoid split-brain and data loss issues.
>
>
Mmmmh, this is ringing an alarm bell then. So a 4-host configuration with
all 4 hosts having a data domain in a replica-2 fashion is basically
impossible (or at least not supported)? Is replica 3 arbiter 1 the only
supported HA configuration? So if I want to expand storage (apart from
adding disks to the same machines) I must add machines 3 by 3?

Currently I have 4 machines with 4 disks each in a RAID-10 configuration,
exposed as one brick. Which is the best HA solution in this scenario then?

Thanks

-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Sahina Bose
On Fri, Sep 23, 2016 at 5:02 PM, Davide Ferrari  wrote:

>
> 2016-09-23 13:17 GMT+02:00 Sahina Bose :
>
>>
>> What are the stability issues you're facing? The data volume, if used as
>> a data storage domain, should be a replica 3 volume as well.
>>
>
> Basically that after the first host installation+deploy (from the CLI),
> after I enable the gluster management in the cluster, I have to manually
> restart vdsmd on host1 to be able to install the other hosts. But maybe I
> should just wait longer for vdsmd to catch up with everything, I don't know.
>

Once gluster management is enabled on the cluster, we have not noticed the
need to restart vdsm to install other hosts. There was an issue where hosts
would not be identified as gluster hosts unless they were activated again.
This will be fixed in the next 4.0 release as the patches have been merged
already.

If you encounter the issue again, could you post the hosted-engine deploy
logs from the 2nd host?


>
> Then I have some other problem, like a ghost VM stuck on one host after
> moving the host to maintenance and the VM (the hosted-engine, the only one
> running in the whole cluster) being correctly migrated to another host,
> solved only by a manual reboot of the whole host (and consequent HE fencing
> of the host). I must say that that particular host is giving ECC correction
> errors in one DIMM, so maybe it could just be a hardware-related problem.
>
>
vdsm and engine logs would help here


>
>>
>
>>> Deploy the hosted-engine on the first VM (with the engine volume) from
>>> the CLI, then log in to the oVirt admin portal, enable gluster support,
>>> install *and deploy* host2 and host3 from the GUI (where the engine
>>> bricks are) and then install host4 without deploying. This should get
>>> you the 4 hosts online, but the engine will run only on the first 3.
>>>
>>
>> Right. You can add the 4th node to the cluster but not have any bricks
>> on this volume, in which case VMs will run on this node but will access
>> data from the other 3 nodes.
>>
>
> Well, actually I *do* have data bricks on the 4th host, it's just the
> engine volume that's not present there (but that host is not HE eligible
> anyway). Am I doing something wrong?
>
>
If you have additional capacity on the other 3 hosts, then yes, you can
create a new gluster volume with a brick on the newly added 4th node and
bricks from other nodes - this volume can be used as another storage
domain. You are not doing anything wrong :) Keep in mind that all gluster
volumes used as data storage domains should be replica 3 or replica 3
arbiter to avoid split-brain and data loss issues.
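
If a replica 2 data volume already exists, recent gluster releases can also
convert it in place by adding arbiter bricks, roughly like this (a sketch
assuming gluster 3.8+, a single sub-volume, and a placeholder brick path;
let the heal finish before relying on it):

  # raise the volume from replica 2 to replica 3 with an arbiter brick
  gluster volume add-brick data replica 3 arbiter 1 node4:/arb_brick
  # watch the self-heal populate the arbiter metadata
  gluster volume heal data info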
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Davide Ferrari
2016-09-23 13:17 GMT+02:00 Sahina Bose :

>
> What are the stability issues you're facing? The data volume, if used as a
> data storage domain, should be a replica 3 volume as well.
>

Basically that after the first host installation+deploy (from the CLI),
after I enable the gluster management in the cluster, I have to manually
restart vdsmd on host1 to be able to install the other hosts. But maybe I
should just wait longer for vdsmd to catch up with everything, I don't know.
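
For reference, the manual workaround is just a standard service restart on
host1 (service name as on a stock oVirt host install):

  systemctl restart vdsmd
  systemctl status vdsmd   # confirm it is running again before adding hosts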

Then I have some other problem, like a ghost VM stuck on one host after
moving the host to maintenance and the VM (the hosted-engine, the only one
running in the whole cluster) being correctly migrated to another host,
solved only by a manual reboot of the whole host (and consequent HE fencing
of the host). I must say that that particular host is giving ECC correction
errors in one DIMM, so maybe it could just be a hardware-related problem.


>

>> Deploy the hosted-engine on the first VM (with the engine volume) from
>> the CLI, then log in to the oVirt admin portal, enable gluster support,
>> install *and deploy* host2 and host3 from the GUI (where the engine
>> bricks are) and then install host4 without deploying. This should get
>> you the 4 hosts online, but the engine will run only on the first 3.
>>
>
> Right. You can add the 4th node to the cluster but not have any bricks on
> this volume, in which case VMs will run on this node but will access data
> from the other 3 nodes.
>

Well, actually I *do* have data bricks on the 4th host, it's just the
engine volume that's not present there (but that host is not HE eligible
anyway). Am I doing something wrong?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Sahina Bose
On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari  wrote:

> I'm struggling with the same problem (I say struggling because I'm still
> having stability issues for what I consider a stable cluster) but you can:
> - create a replica 3 engine gluster volume
> - create replica 2 data, iso and export volumes
>

What are the stability issues you're facing? The data volume, if used as a
data storage domain, should be a replica 3 volume as well.


>
> Deploy the hosted-engine on the first VM (with the engine volume) from
> the CLI, then log in to the oVirt admin portal, enable gluster support,
> install *and deploy* host2 and host3 from the GUI (where the engine
> bricks are) and then install host4 without deploying. This should get
> you the 4 hosts online, but the engine will run only on the first 3.
>

Right. You can add the 4th node to the cluster but not have any bricks on
this volume, in which case VMs will run on this node but will access data
from the other 3 nodes.


>
> 2016-09-23 11:14 GMT+02:00 Goorkate, B.J. :
>
>> Dear all,
>>
>> I've tried to find a way to add a 4th oVirt-node to my existing
>> 3-node setup with replica-3 gluster storage, but found no usable
>> solution yet.
>>
>> From what I read, it's not wise to create a replica-4 gluster
>> storage, because of bandwidth overhead.
>>
>> Is there a safe way to do this and still have 4 equal oVirt nodes?
>>
>> Thanks in advance!
>>
>> Regards,
>>
>> Bertjan
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
> Davide Ferrari
> Senior Systems Engineer
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Davide Ferrari
I'm struggling with the same problem (I say struggling because I'm still
having stability issues for what I consider a stable cluster) but you can:
- create a replica 3 engine gluster volume
- create replica 2 data, iso and export volumes

Deploy the hosted-engine on the first VM (with the engine volume) from the
CLI, then log in to the oVirt admin portal, enable gluster support, install
*and deploy* host2 and host3 from the GUI (where the engine bricks are) and
then install host4 without deploying. This should get you the 4 hosts
online, but the engine will run only on the first 3.
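
Roughly, the first step looks like this (a sketch only; host names and
brick paths are placeholders):

  # replica 3 volume for the hosted engine, bricks on host1..host3
  gluster volume create engine replica 3 \
    host1:/gluster/engine/brick host2:/gluster/engine/brick \
    host3:/gluster/engine/brick
  gluster volume start engine
  # then deploy the hosted engine from the CLI on host1
  hosted-engine --deploy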

2016-09-23 11:14 GMT+02:00 Goorkate, B.J. :

> Dear all,
>
> I've tried to find a way to add a 4th oVirt-node to my existing
> 3-node setup with replica-3 gluster storage, but found no usable
> solution yet.
>
> From what I read, it's not wise to create a replica-4 gluster
> storage, because of bandwidth overhead.
>
> Is there a safe way to do this and still have 4 equal oVirt nodes?
>
> Thanks in advance!
>
> Regards,
>
> Bertjan
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Goorkate, B.J.
Dear all,

I've tried to find a way to add a 4th oVirt-node to my existing 
3-node setup with replica-3 gluster storage, but found no usable
solution yet.

From what I read, it's not wise to create a replica-4 gluster
storage, because of bandwidth overhead.

Is there a safe way to do this and still have 4 equal oVirt nodes?

Thanks in advance!

Regards,

Bertjan

--

This message may contain confidential information and is intended exclusively
for the addressee. If you receive this message unintentionally, please do not
use the contents but notify the sender immediately by return e-mail. University
Medical Center Utrecht is a legal person by public law and is registered at
the Chamber of Commerce for Midden-Nederland under no. 30244197.

Please consider the environment before printing this e-mail.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users