[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-20 Thread Strahil Nikolov via Users
remove-brick replica 1 will deconfigure your Arbiter, which means brick prep 
work on the Arbiter afterwards - just more effort.
>Also, you suggested creating a fresh logical volume on the node where I'm 
>converting from full replica to arbiter. Would it not suffice to simply go in 
>and erase all of the data?
I can't catch the context properly, but keep in mind that gluster uses extended 
fs attributes, which means that either you have to clean them up, or just 
'mkfs.xfs -i size=512' the whole LV, which for me is far simpler. As you will 
reuse the disks in the other nodes, you most probably will use a new device anyway.
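(As a rough sketch of both options - using the /gluster_bricks/data/data brick path that appears later in this thread and a made-up LV path, so adjust to your layout:)
# setfattr -x trusted.glusterfs.volume-id /gluster_bricks/data/data
# setfattr -x trusted.gfid /gluster_bricks/data/data
# rm -rf /gluster_bricks/data/data/.glusterfs
(or, the simpler route - wipe and recreate the filesystem on the LV:)
# mkfs.xfs -f -i size=512 /dev/gluster_vg/data_lv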
Gluster arbiter is always the 3rd entry in the subvolume (3,6,9,etc).
By default inodes are a percentage of the free space, and on small disks (you 
don't want a 10TB RAID for an arbiter brick, right?) the amount of inodes will 
be smaller on the arbiter brick. Example (numbers are made up): Host1 - 10TB disk 
- 10,000,000 inodes; Host2 - 10TB disk - 10,000,000 inodes; Arbiter - 100G disk - 
100,000 inodes.
Once you hit the Arbiter inode limit, you can't use it any more.
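(A quick way to watch how close an arbiter brick is to that limit is the inode view of df, run against the brick mount point - the path here is borrowed from the plan later in this thread:)
# df -i /gluster_bricks/data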
Don't reuse old bricks - a fresh brick will save you from future nightmares. Just 
remove the 3rd brick, add the new disk on the 3rd host, create a fresh brick as the 
new arbiter, then stop gluster on one host, rebuild the RAID, reset-brick, let it 
heal, and do the same on the other host.
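(For reference, a rough sketch of that reset-brick cycle on one host, using the volume and host names from the plan quoted below - not verbatim from the original mail:)
# gluster volume reset-brick data host1.mgt.example.com:/gluster_bricks/data/data start
(rebuild the RAID, recreate the LV/filesystem, mount it at the same path)
# gluster volume reset-brick data host1.mgt.example.com:/gluster_bricks/data/data host1.mgt.example.com:/gluster_bricks/data/data commit force
# gluster volume heal data info
(wait until the heal finishes, then repeat on the other host)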

Best Regards,
Strahil Nikolov
 
[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-19 Thread David White via Users
Thank you.
I'm doing some more research & reading on this to make sure I understand 
everything before I do this work.

You wrote:
> If you rebuild the raid, you are destroying the brick, so after mounting it 
> back, you will need to reset-brick. If it doesn't work for some reason , you 
> can always remove-brick replica 1 host1:/path/to/brick arbiter:/path/to/brick 
> and readd them with add-brick replica 3 arbiter 1.

Would it be safer to remove-brick replica 1 before I destroy the brick?

Also, you suggested creating a fresh logical volume on the node where I'm 
converting from full replica to arbiter.
Would it not suffice to simply go in and erase all of the data?

I'm still not clear on how to force gluster to use a specific server as the 
arbiter node.
Will gluster just "figure it out" if the logical volume on the arbiter is 
smaller than on the other two nodes?

I'm also still a little bit unclear on why I need to specifically increase the 
inode count when I go to add-brick on the arbiter node. Or is that when (if?) I 
rebuild the Logical Volume?
Do I need to increase the inode count on the other two servers after I grow the 
RAID on the two primary storage servers?

Below is an overview of the specific steps I think I need to take, in order:

Prepare:
Put cluster into global maintenance mode
Run the following commands:
# gluster volume status data inode

host 3 (Arbiter):
# gluster volume remove-brick data replica 2 
host3.mgt.example.com:/gluster_bricks/data/data
# rm -rf /gluster_bricks/data/data/*
# ALTERNATIVELY... rebuild the logical volume? Is this really necessary?
# gluster volume add-brick data replica 3 arbiter 1 
host3.mgt.example.com:/gluster_bricks/data/data

(Let the volumes heal completely) 
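(One way to confirm the heals really are complete before moving on - 'data' being the volume name used above:)
# gluster volume heal data info
# gluster volume heal data statistics heal-count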


host 1
# gluster volume remove-brick data replica 1 
host1.mgt.example.com:/gluster_bricks/data/data
(rebuild the array & reboot the server)
# gluster volume add-brick data replica 3 arbiter 1 
host1.mgt.example.com:/gluster_bricks/data/data

(Let the volumes heal completely)

host 2
# gluster volume remove-brick data replica 1 
host2.mgt.example.com:/gluster_bricks/data/data
(rebuild the array & reboot the server)
# gluster volume add-brick data replica 3 arbiter 1 
host2.mgt.example.com:/gluster_bricks/data/data

Sent with ProtonMail Secure Email.


[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-11 Thread Strahil Nikolov via Users
>2a) remove-brick replica 2
-- if I understand correctly, this will basically just reconfigure the existing 
volume to replicate between the 2 bricks, and not all 3 ... is this correct?

Yep, you are kicking out the 3rd node and the volume is converted to replica 2.
Most probably the command would be gluster volume remove-brick vmstore replica 
2 host3:/path/to/brick force

>2b) add-brick replica 3 arbiter 1
-- If I understand correctly, this will reconfigure the volume (again), adding 
the 3rd server's storage back to the Gluster volume, but only as an arbiter 
node, correct?
Yes, I would prefer to create a fresh new LV. Don't forget to raise the inode 
count higher, as this one will be an arbiter brick (see previous e-mail).
Once you add via 'gluster volume add-brick vmstore replica 3 arbiter 1 
host3:/path/to/new/brick', you will have to wait for all heals to complete.
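(If you want to script the wait, something along these lines should work - a sketch only, assuming the volume is called vmstore as in the commands above:)
# until [ "$(gluster volume heal vmstore statistics heal-count | awk 'BEGIN{s=0} /Number of entries/{s+=$NF} END{print s}')" = "0" ]; do sleep 60; done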
 


>3) Now with everything healthy, the volume is now a Replica 2 / Arbiter 1 
>and I can now stop gluster on each of the 2 servers getting the storage 
>upgrade, rebuild the RAID on the new storage, reboot, and let gluster heal 
>itself before moving on to the next server.

If you rebuild the raid, you are destroying the brick, so after mounting it 
back, you will need to reset-brick. If it doesn't work for some reason , you 
can always remove-brick replica 1 host1:/path/to/brick arbiter:/path/to/brick 
and readd them with add-brick replica 3 arbiter 1.

I had some paused VMs after RAID reshaping (spinning disks) during the healing, 
but my lab is running on workstations. So do it in the least busy hours, and any 
backups should have completed before the reconfiguration, not right in the middle 
of the healing ;)


Best Regards,
Strahil Nikolov


[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-10 Thread David White via Users
Hi Jayme & Strahil,
Thank you again for your messages.

Reading 
https://stackoverflow.com/questions/52394849/can-i-change-glusterfs-replica-3-to-replica-3-with-arbiter-1,
 I think I understand now what Strahil is suggesting.

It sounds to me like you're saying I can reconfigure the existing gluster 
volume (don't destroy it) by changing it from a Replica 3 over to a Replica 
2/Arbiter 1.

Then once I do that operation, I can THEN go in and put the two hosts into 
maintenance mode one at a time to rebuild the RAID arrays, and then heal the 
volume on the larger RAID.

So it would look like this:
1) Put 3rd host into maintenance mode & verify no heals are necessary
2) Make the 3rd host the arbiter
2a) remove-brick replica 2
-- if I understand correctly, this will basically just reconfigure the existing 
volume to replicate between the 2 bricks, and not all 3 ... is this correct?

2b) add-brick replica 3 arbiter 1
-- If I understand correctly, this will reconfigure the volume (again), adding 
the 3rd server's storage back to the Gluster volume, but only as an arbiter 
node, correct?

3) Now with everything healthy, the volume is now a Replica 2 / Arbiter 1 
and I can now stop gluster on each of the 2 servers getting the storage 
upgrade, rebuild the RAID on the new storage, reboot, and let gluster heal 
itself before moving on to the next server.

Do I understand this process right?

Jayme's suggestion to use my 4th server as a temporary NFS location is not a bad 
idea. I could definitely do that, "just in case" the Gluster volume got 
corrupted. Unfortunately my backup server has spinning disks instead of SSD 
drives, but the slower speed wouldn't be too noticeable, and I could do it over a 
weekend or something.

Thanks again for your input.

Sent with ProtonMail Secure Email.


[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-10 Thread Jayme
Just a thought, but depending on resources you might be able to use your 4th
server as NFS storage and live-migrate VM disks to it, off of your gluster
volumes. I’ve done this in the past when doing major maintenance on gluster
volumes, to err on the side of caution.
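(A minimal sketch of such a temporary NFS export on the 4th box - the path and network are placeholders, and the export still has to be added as an NFS storage domain in the oVirt UI before disks can be live-migrated onto it:)
# mkdir -p /exports/temp-vmstore
# chown 36:36 /exports/temp-vmstore
(36:36 is vdsm:kvm, the ownership oVirt expects on NFS domains)
# echo '/exports/temp-vmstore 10.0.0.0/24(rw,anonuid=36,anongid=36)' >> /etc/exports
# exportfs -ra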


[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-10 Thread David White via Users
Hmm, right as I said that, I just had a thought.
I DO have a "backup" server in place (that I haven't even started using yet), 
that currently has some empty hard drive bays.

It would take some extra work, but I could use that 4th backup server as a 
temporary staging ground to begin building the new Gluster configuration. Once 
I have that server + 2 of my production servers rebuilt properly, I could then 
simply remove and replace this "backup" server with my 3rd server in the 
cluster.

So this effectively means that I have 2 servers that I can take down completely 
at a time to rebuild gluster, instead of just 1. I think that simplifies things.
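(If it later comes to swapping the temporary backup server back out for the 3rd server, gluster can do that brick-for-brick - a rough sketch with made-up host and brick names, not something spelled out in this thread:)
# gluster peer probe host3.mgt.example.com
# gluster volume replace-brick data backup4.mgt.example.com:/gluster_bricks/data/data host3.mgt.example.com:/gluster_bricks/data/data commit force
(wait for the heal to finish, then: gluster peer detach backup4.mgt.example.com)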

Sent with ProtonMail Secure Email.



[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-10 Thread David White via Users
Thank you. And yes, I agree, this needs to occur in a maintenance window and be 
done very carefully. :)

My only problem with this method is that I need to *replace* disks in the two 
servers.
I don't have any empty hard drive bays, so I will effectively need to put a host 
into maintenance mode, remove the drives, and put new drives in.

I will NOT be touching the OS drives, however, as those are on their own 
separate RAID array. 

So, essentially, it will need to look something like this:

-   Put the cluster into global maintenance mode
-   Put 1 host into full maintenance mode / deactivate it
-   Stop gluster
-   Remove the storage
-   Add the new storage & reconfigure
-   Start gluster
-   Re-add the host to the cluster

Adding the new storage & reconfiguring is the head scratcher for me, given that 
I don't have room for the old hard drives + new hard drives at the same time.
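(On the gluster side, the stop/start steps on each host roughly come down to something like this - a sketch, using the volume name 'data' from the plan elsewhere in this thread; brick processes don't always exit with the management daemon, hence the pkill:)
# systemctl stop glusterd
# pkill glusterfsd
(replace the disks, rebuild the RAID, recreate and mount the brick filesystem)
# systemctl start glusterd
# gluster volume heal data info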

Sent with ProtonMail Secure Email.



[ovirt-users] Re: Reconfigure Gluster from Replica 3 to Arbiter 1/Replica 2

2021-07-10 Thread Strahil Nikolov via Users
Hi David,

any storage operation can cause unexpected situations, so always plan your 
activities for low-traffic hours and test them on your test environment in 
advance.
I think it's easier if you (command line):
- verify no heals are pending. Not a single one.
- set the host to maintenance in oVirt
- remove the third node from the gluster volumes (remove-brick replica 2)
- umount the bricks on the third node
- recreate a smaller LV with 'mkfs.xfs -i size=512,maxpct=90' and mount it with 
the same options as the rest of the nodes. Usually I use 
'noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0'
- add this new brick (add-brick replica 3 arbiter 1) to the volume
- wait for the heals to finish
Then repeat again for each volume.

Adding the new disks should be done later.
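(Strung together as shell commands, with illustrative volume, host, and device names, one pass of that procedure might look roughly like this - a sketch, not a tested recipe:)
# gluster volume heal data info
(must show zero pending entries before you start)
# gluster volume remove-brick data replica 2 host3:/gluster_bricks/data/data force
# umount /gluster_bricks/data
# lvremove -f /dev/gluster_vg/data_lv
# lvcreate -L 100G -n data_lv gluster_vg
# mkfs.xfs -i size=512,maxpct=90 /dev/gluster_vg/data_lv
# mount -o noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0 /dev/gluster_vg/data_lv /gluster_bricks/data
# mkdir /gluster_bricks/data/data
# gluster volume add-brick data replica 3 arbiter 1 host3:/gluster_bricks/data/data
# gluster volume heal data info
(wait for the heal to finish)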

Best Regards,
Strahil Nikolov
 
  On Sat, Jul 10, 2021 at 3:15, David White via Users wrote:   
My current hyperconverged environment is replicating data across all 3 servers.
I'm running critically low on disk space, and need to add space.

To that end, I've ordered 8x 800GB ssd drives, and plan to put 4 drives in 1 
server, and 4 drives in the other.

What's my best option for reconfiguring the hyperconverged cluster, to change 
gluster storage away from Replica 3 to a Replica 2 / Arbiter model?
I'd really prefer not to have to reinstall things from scratch, but I'll do 
that if I have to.

My most important requirement is that I cannot have any downtime for my VMs (so 
I can only reconfigure 1 host at a time).

Sent with ProtonMail Secure Email.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KZFQ3GNUCONSDVXZJNDFUWOT5244GABS/