Thank you. And yes, I agree, this needs to occur in a maintenance window and be 
done very carefully. :)

My only problem with this method is that I need to *replace* disks in the two 
servers.
I don't have any empty hard drive bays, so I will effectively need to put a host 
into maintenance mode, remove the drives, and put new drives in.

I will NOT be touching the OS drives, however, as those are on their own 
separate RAID array. 

So, essentially, it will need to look something like this:

-   Put the cluster into global maintenance mode
-   Put 1 host into full maintenance mode / deactivate it
-   Stop gluster
-   Remove the storage
-   Add the new storage & reconfigure
-   Start gluster
-   Re-add the host to the cluster

Adding the new storage & reconfiguring is the head-scratcher for me, given that 
I don't have room for the old hard drives and the new hard drives at the same time.
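If the per-host swap follows the remove-brick / add-brick cycle Strahil describes below, the old disks only need to stay in place until their brick has been removed from the volume, so old and new drives never have to coexist. A rough sketch for one volume, run from a surviving node (the volume name "data", hostnames, and brick/LV paths are all hypothetical placeholders; substitute your own layout):

```shell
# Hedged sketch -- volume "data", host "host3", and device/brick paths
# are assumptions for illustration; adjust to your actual setup.

# 0. Verify there are zero pending heals before touching anything
gluster volume heal data info

# 1. Drop host3's brick, shrinking the volume to replica 2
gluster volume remove-brick data replica 2 \
    host3:/gluster_bricks/data/brick force

# 2. Unmount the old brick on host3; its disks are now free to pull
umount /gluster_bricks/data

# 3. After installing the new disks: recreate the LV and filesystem
#    (smaller is fine for an arbiter brick), then mount it with the
#    same options as the other nodes
mkfs.xfs -i size=512,maxpct=90 /dev/mapper/gluster_vg-data_lv
mount -o noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0 \
    /dev/mapper/gluster_vg-data_lv /gluster_bricks/data

# 4. Re-add the brick, converting the volume to replica 2 + arbiter
gluster volume add-brick data replica 3 arbiter 1 \
    host3:/gluster_bricks/data/brick

# 5. Wait for self-heal to complete before moving to the next volume
gluster volume heal data info summary
```

Since the brick leaves the volume before the disks come out, this sequence sidesteps the need for spare drive bays, at the cost of running each volume at replica 2 (no redundancy margin) while the swap is in flight.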

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Saturday, July 10th, 2021 at 5:55 AM, Strahil Nikolov 
<[email protected]> wrote:

> Hi David,
> 

> any storage operation can cause unexpected situations, so always plan your 
> activities for low traffic hours and test them on your test environment in 
> advance.
> 

> I think it's easier if you (command line):
> 

> - verify no heals are pending. Not a single one.
> - set the host to maintenance over oVirt
> - remove the third node from the gluster volumes (remove-brick replica 2)
> - umount the bricks on the third node
> - recreate a smaller LV, format it with '-i maxpct=90 size=512', and mount it 
>   with the same options as the rest of the nodes. Usually I use 
>   'noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0'
> - add this new brick (add-brick replica 3 arbiter 1) to the volume
> - wait for the heals to finish
> 

> Then repeat again for each volume.
> 

> Adding the new disks should be done later.
> 

> Best Regards,
> Strahil Nikolov
> 

> > On Sat, Jul 10, 2021 at 3:15, David White via Users <[email protected]> 
> > wrote:
> > 
> > My current hyperconverged environment is replicating data across all 
> > 3 servers.
> > I'm running critically low on disk space, and need to add space.
> > 

> > To that end, I've ordered 8x 800GB ssd drives, and plan to put 4 drives in 
> > 1 server, and 4 drives in the other.
> > 

> > What's my best option for reconfiguring the hyperconverged cluster, to 
> > change gluster storage away from Replica 3 to a Replica 2 / Arbiter model?
> > I'd really prefer not to have to reinstall things from scratch, but I'll do 
> > that if I have to.
> > 

> > My most important requirement is that I cannot have any downtime for my VMs 
> > (so I can only reconfigure 1 host at a time).
> > 

> > Sent with ProtonMail Secure Email.
> > 



_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/PFMIS2NCPECKJIQUYWUOO2FZSWF2PVFO/
