Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-07-03 Thread yayo (j)
2017-07-03 15:42 GMT+02:00 knarra:

> So, please power off your VMs while performing this.


Thank you,

OK, no problem, the cluster is not (yet) in production.

Thank you again!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-07-03 Thread knarra

On 07/03/2017 06:58 PM, knarra wrote:

On 07/03/2017 06:53 PM, yayo (j) wrote:

Hi,

And sorry for delay

2017-06-30 14:09 GMT+02:00 knarra:


To add a fully replicated node you need to reduce the replica
count to 2 and then add the new brick to the volume so that it
becomes replica 3. Reducing the replica count by removing a brick
from a replica/arbiter volume cannot currently be done from the
UI; it has to be done using the gluster CLI.
AFAIR, there was an issue where VMs were going into a paused state
when reducing the replica count and increasing it back to 3. Not
sure if this still holds good with the latest release.
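
For reference, the CLI sequence described above would look roughly like
the sketch below. The volume name (data01), hostnames, and brick paths
are assumptions; substitute the values shown by `gluster volume info`.

```shell
# Sketch only: "data01", "arbiter-host", "new-host", and the brick
# paths are placeholders, not values from this cluster.

# 1) Remove the arbiter brick, reducing the volume to plain replica 2:
gluster volume remove-brick data01 replica 2 \
    arbiter-host:/rhgs/bricks/data01 force

# 2) Add the new node's brick, bringing it back to replica 3
#    (now with three full data copies):
gluster volume add-brick data01 replica 3 \
    new-host:/rhgs/bricks/data01

# 3) Let self-heal copy the data; wait until no entries remain:
gluster volume heal data01 info
```

Given the VM-pause issue mentioned above, keeping the VMs powered off
until the heal completes is the safe choice.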

Any specific reason why you want to move to full replication
instead of using an arbiter node?


We have a new server with the same hard disk size as the other two
nodes, so why not? Why join the cluster as an arbiter when we have
the same disk capacity and can add full replication?




and remove the arbiter node (also a way to move the arbiter role
to the new node, if needed)

To move the arbiter role to a new node you can move the node to
maintenance, add the new node, and replace the old brick with the
new brick. You can follow the steps below to do that.

  * Move the node to be replaced into Maintenance mode
  * Prepare the replacement node
  * Prepare bricks on that node.
  * Create replacement brick directories
  * Ensure the new directories are owned by the vdsm user and the
kvm group:
  * # mkdir /rhgs/bricks/engine
  * # chown vdsm:kvm /rhgs/bricks/engine
  * # mkdir /rhgs/bricks/data
  * # chown vdsm:kvm /rhgs/bricks/data
  * Run the following command from one of the healthy cluster
members to add the new host to the cluster:
  * # gluster peer probe <new_host>
  * Add the new host address to the gluster network:
  * Click the Network Interfaces sub-tab.
  * Click Set up Host Networks.
  * Drag and drop the glusternw network onto the IP address of
the new host.
  * Click OK
  * Replace the old brick with the brick on the new host:
  * Click the Bricks sub-tab.
  * Verify that the brick heal completes successfully.
  * In the Hosts tab, right-click on the old host and click Remove.
  * Clean the old host metadata:
  * # hosted-engine --clean-metadata --host-id=<old_host_id>
--force-clean
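
The brick-replacement step in that list can also be done from the
gluster CLI instead of the UI. A rough sketch, using the engine volume
as an example; "old-host" and "new-host" are hypothetical names:

```shell
# Sketch with assumed names: "engine" volume, old-host / new-host.
# Swap the old brick for the prepared brick on the new host:
gluster volume replace-brick engine \
    old-host:/rhgs/bricks/engine new-host:/rhgs/bricks/engine \
    commit force

# Track the heal; the old host is safe to remove once this reports
# zero unhealed entries for every brick:
gluster volume heal engine info
```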



Do I still need this (read: the arbiter role) if I reduce the
replica count, then add the new node as a full replica and increase
the replica count back to 3? (As you explained above)



The above steps hold good if you want to move the arbiter role to a new node.

If you want to move to full replica, reducing the replica count will
work fine, but increasing it back to 3 might cause VM pause issues.

So, please power off your VMs while performing this.






Thank you










Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-07-03 Thread knarra

On 07/03/2017 06:53 PM, yayo (j) wrote:

Hi,

And sorry for delay

2017-06-30 14:09 GMT+02:00 knarra:


To add a fully replicated node you need to reduce the replica
count to 2 and then add the new brick to the volume so that it
becomes replica 3. Reducing the replica count by removing a brick
from a replica/arbiter volume cannot currently be done from the
UI; it has to be done using the gluster CLI.
AFAIR, there was an issue where VMs were going into a paused state
when reducing the replica count and increasing it back to 3. Not
sure if this still holds good with the latest release.

Any specific reason why you want to move to full replication
instead of using an arbiter node?


We have a new server with the same hard disk size as the other two
nodes, so why not? Why join the cluster as an arbiter when we have
the same disk capacity and can add full replication?




and remove the arbiter node (also a way to move the arbiter role
to the new node, if needed)

To move the arbiter role to a new node you can move the node to
maintenance, add the new node, and replace the old brick with the
new brick. You can follow the steps below to do that.

  * Move the node to be replaced into Maintenance mode
  * Prepare the replacement node
  * Prepare bricks on that node.
  * Create replacement brick directories
  * Ensure the new directories are owned by the vdsm user and the
kvm group:
  * # mkdir /rhgs/bricks/engine
  * # chown vdsm:kvm /rhgs/bricks/engine
  * # mkdir /rhgs/bricks/data
  * # chown vdsm:kvm /rhgs/bricks/data
  * Run the following command from one of the healthy cluster members
to add the new host to the cluster:
  * # gluster peer probe <new_host>
  * Add the new host address to the gluster network:
  * Click the Network Interfaces sub-tab.
  * Click Set up Host Networks.
  * Drag and drop the glusternw network onto the IP address of the
new host.
  * Click OK
  * Replace the old brick with the brick on the new host:
  * Click the Bricks sub-tab.
  * Verify that the brick heal completes successfully.
  * In the Hosts tab, right-click on the old host and click Remove.
  * Clean the old host metadata:
  * # hosted-engine --clean-metadata --host-id=<old_host_id>
--force-clean



Do I still need this (read: the arbiter role) if I reduce the
replica count, then add the new node as a full replica and increase
the replica count back to 3? (As you explained above)



The above steps hold good if you want to move the arbiter role to a new node.

If you want to move to full replica, reducing the replica count will
work fine, but increasing it back to 3 might cause VM pause issues.






Thank you





Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-07-03 Thread yayo (j)
Hi,

And sorry for delay

2017-06-30 14:09 GMT+02:00 knarra:

> To add a fully replicated node you need to reduce the replica count to 2
> and then add the new brick to the volume so that it becomes replica 3.
> Reducing the replica count by removing a brick from a replica/arbiter
> volume cannot currently be done from the UI; it has to be done using the
> gluster CLI.
> AFAIR, there was an issue where VMs were going into a paused state when
> reducing the replica count and increasing it back to 3. Not sure if this
> still holds good with the latest release.
>
> Any specific reason why you want to move to full replication instead of
> using an arbiter node?
>
>
We have a new server with the same hard disk size as the other two nodes,
so why not? Why join the cluster as an arbiter when we have the same disk
capacity and can add full replication?



>
> and remove the arbiter node (also a way to move the arbiter role to the
> new node, if needed)
>
> To move the arbiter role to a new node you can move the node to
> maintenance, add the new node, and replace the old brick with the new
> brick. You can follow the steps below to do that.
>
>
>- Move the node to be replaced into Maintenance mode
>- Prepare the replacement node
>- Prepare bricks on that node.
>- Create replacement brick directories
>- Ensure the new directories are owned by the vdsm user and the kvm
>group:
>- # mkdir /rhgs/bricks/engine
>- # chown vdsm:kvm /rhgs/bricks/engine
>- # mkdir /rhgs/bricks/data
>- # chown vdsm:kvm /rhgs/bricks/data
>- Run the following command from one of the healthy cluster members
>to add the new host to the cluster:
>- # gluster peer probe <new_host>
>- Add the new host address to the gluster network:
>- Click the Network Interfaces sub-tab.
>- Click Set up Host Networks.
>- Drag and drop the glusternw network onto the IP address of the new
>host.
>- Click OK
>- Replace the old brick with the brick on the new host:
>- Click the Bricks sub-tab.
>- Verify that the brick heal completes successfully.
>- In the Hosts tab, right-click on the old host and click Remove.
>- Clean the old host metadata:
>- # hosted-engine --clean-metadata --host-id=<old_host_id>
>--force-clean
>
>
>
Do I still need this (read: the arbiter role) if I reduce the replica
count, then add the new node as a full replica and increase the replica
count back to 3? (As you explained above)

Thank you


Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread knarra

On 06/30/2017 04:53 PM, yayo (j) wrote:


2017-06-30 12:54 GMT+02:00 yayo (j):


The current arbiter must be removed because it is too obsolete. So,
I need to add the new "full replicated" node, but I want to know
the steps to add a new "full replicated" node and remove the
arbiter node (also a way to move the arbiter role to the new
node, if needed). Extra info: I want to know if I can do this on
an existing oVirt gluster data domain (called Data01) because we
have many VMs running on it.


Hi,

I have found this doc from RH about replacing host in a gluster env: 
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Replacing_Hosts.html


Can I use the command described at point 7?


# gluster volume replace-brick vol sys0.example.com:/rhs/brick1/b1
sys5.example.com:/rhs/brick1/b1 commit force

volume replace-brick: success: replace-brick commit successful


The question is: will the replaced node be a data node (a "full
replicated" node), or will it be an arbiter again?

It will be an arbiter again.


Thank you





Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread knarra

On 06/30/2017 04:24 PM, yayo (j) wrote:


2017-06-30 11:01 GMT+02:00 knarra:


You do not need to remove the arbiter node as you are getting the
advantage of saving on space by having this config.

Since you have a new server, you can add it as a fourth node and
create another gluster volume (replica 3) out of this node plus the
other two nodes and run VM images there as well.


Hi,

And thanks for the answer. The current arbiter must be removed because
it is too obsolete. So, I need to add the new "full replicated" node,
but I want to know the steps to add a new "full replicated" node
To add a fully replicated node you need to reduce the replica count to
2 and then add the new brick to the volume so that it becomes replica 3.
Reducing the replica count by removing a brick from a replica/arbiter
volume cannot currently be done from the UI; it has to be done using
the gluster CLI.
AFAIR, there was an issue where VMs were going into a paused state when
reducing the replica count and increasing it back to 3. Not sure if this
still holds good with the latest release.


Any specific reason why you want to move to full replication instead of
using an arbiter node?




and remove the arbiter node (also a way to move the arbiter role to
the new node, if needed)
To move the arbiter role to a new node you can move the node to
maintenance, add the new node, and replace the old brick with the new
brick. You can follow the steps below to do that.


 * Move the node to be replaced into Maintenance mode
 * Prepare the replacement node
 * Prepare bricks on that node.
 * Create replacement brick directories
 * Ensure the new directories are owned by the vdsm user and the kvm group:
 * # mkdir /rhgs/bricks/engine
 * # chown vdsm:kvm /rhgs/bricks/engine
 * # mkdir /rhgs/bricks/data
 * # chown vdsm:kvm /rhgs/bricks/data
 * Run the following command from one of the healthy cluster members to
add the new host to the cluster:
 * # gluster peer probe <new_host>
 * Add the new host address to the gluster network:
 * Click the Network Interfaces sub-tab.
 * Click Set up Host Networks.
 * Drag and drop the glusternw network onto the IP address of the new host.
 * Click OK
 * Replace the old brick with the brick on the new host:
 * Click the Bricks sub-tab.
 * Verify that the brick heal completes successfully.
 * In the Hosts tab, right-click on the old host and click Remove.
 * Clean the old host metadata:
 * # hosted-engine --clean-metadata --host-id=<old_host_id> --force-clean



. Extra info: I want to know if I can do this on an existing oVirt
gluster data domain (called Data01) because we have many VMs running on it.
When you move your node to maintenance, all the VMs running on that node
will be migrated to another node, and since you have two nodes up and
running, there should not be any problem.


thank you




Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread yayo (j)
2017-06-30 12:54 GMT+02:00 yayo (j):

> The current arbiter must be removed because it is too obsolete. So, I need
> to add the new "full replicated" node, but I want to know the steps to
> add a new "full replicated" node and remove the arbiter node (also a
> way to move the arbiter role to the new node, if needed). Extra info: I
> want to know if I can do this on an existing oVirt gluster data domain
> (called Data01) because we have many VMs running on it.
>

Hi,

I have found this doc from RH about replacing host in a gluster env:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Replacing_Hosts.html

Can I use the command described at point 7?


# gluster volume replace-brick vol sys0.example.com:/rhs/brick1/b1
sys5.example.com:/rhs/brick1/b1 commit force
volume replace-brick: success: replace-brick commit successful


The question is: will the replaced node be a data node (a "full
replicated" node), or will it be an arbiter again?

Thank you


Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread yayo (j)
2017-06-30 11:01 GMT+02:00 knarra:

> You do not need to remove the arbiter node, as you are getting the
> advantage of saving on space by having this config.
>
> Since you have a new server, you can add it as a fourth node and create
> another gluster volume (replica 3) out of this node plus the other two
> nodes and run vm images there as well.
>

Hi,

And thanks for the answer. The current arbiter must be removed because it
is too obsolete. So, I need to add the new "full replicated" node, but I
want to know the steps to add a new "full replicated" node and remove
the arbiter node (also a way to move the arbiter role to the new node,
if needed). Extra info: I want to know if I can do this on an existing
oVirt gluster data domain (called Data01) because we have many VMs
running on it.

thank you


Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread knarra

On 06/30/2017 02:18 PM, yayo (j) wrote:

Hi at all,

we have a 3 node cluster with this configuration:

oVirt 4.1, 3 node hyperconverged setup with gluster. 2 nodes are "full
replicated" and 1 node is the arbiter.


Now we have a new server to add to the cluster, so we want to add this
new server and remove the arbiter (or make this new server a "full
replicated" gluster node with the arbiter role? I don't know).
You do not need to remove the arbiter node as you are getting the 
advantage of saving on space by having this config.


Since you have a new server, you can add it as a fourth node and create
another gluster volume (replica 3) out of this node plus the other two
nodes and run VM images there as well.
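
A sketch of that alternative: keep the existing arbiter volume, and
build a second pure replica 3 volume from the two full nodes plus the
new server. The hostnames (node1, node2, node4) and brick paths below
are assumptions:

```shell
# Assumed names: node1/node2 are the existing full nodes, node4 is the
# new server; "data02" and the brick paths are illustrative.
gluster volume create data02 replica 3 \
    node1:/rhgs/bricks/data02 \
    node2:/rhgs/bricks/data02 \
    node4:/rhgs/bricks/data02
gluster volume start data02
```

The new volume can then be attached in oVirt as another gluster storage
domain for VM images.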


Can you please help me understand the right way to do this? Or can you
give me any doc or link that explains the steps?


Thank you in advance!





