remove-brick replica 1 will deconfigure your Arbiter, which means you will have to 
redo the brick preparation work on the Arbiter afterwards - just more effort.
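If you do end up going that route, the round trip would look roughly like this (just a sketch - volume name and brick paths are taken from your plan below; both brick filesystems have to be wiped before re-adding, and the last brick listed ends up as the 3rd entry, i.e. the arbiter):
# gluster volume remove-brick data replica 1 host1.mgt.example.com:/gluster_bricks/data/data host3.mgt.example.com:/gluster_bricks/data/data force
(rebuild/wipe both bricks)
# gluster volume add-brick data replica 3 arbiter 1 host1.mgt.example.com:/gluster_bricks/data/data host3.mgt.example.com:/gluster_bricks/data/data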
>Also, you suggested creating a fresh logical volume on the node where I'm
>converting from full replica to arbiter. Would it not suffice to simply go in
>and erase all of the data?
I can't catch the context properly, but keep in mind that gluster uses extended 
fs attributes, which means you either have to clean them up, or just run 
'mkfs.xfs -i size=512' on the whole LV, which for me is far simpler. As you will 
reuse the disks in the other nodes, you will most probably use a new device anyway.
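For illustration, the two options would look roughly like this (brick path is taken from your plan below, the LV device name is just an assumption):
Option A - recreate the filesystem on the LV (simplest, wipes data and xattrs in one go):
# mkfs.xfs -f -i size=512 /dev/gluster_vg/gluster_lv_data
Option B - keep the filesystem and strip gluster's metadata by hand:
# setfattr -x trusted.glusterfs.volume-id /gluster_bricks/data/data
# setfattr -x trusted.gfid /gluster_bricks/data/data
# rm -rf /gluster_bricks/data/data/.glusterfs /gluster_bricks/data/data/*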
Gluster arbiter is always the 3rd brick entry in each subvolume (brick 3, 6, 9, etc.).
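Recent gluster versions also mark the arbiter brick explicitly, so you can verify it with something like (illustrative output, hostnames taken from your plan below):
# gluster volume info data | grep -E 'Brick[0-9]'
Brick1: host1.mgt.example.com:/gluster_bricks/data/data
Brick2: host2.mgt.example.com:/gluster_bricks/data/data
Brick3: host3.mgt.example.com:/gluster_bricks/data/data (arbiter)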
By default inodes are a percentage of the free space, and on small disks (you 
don't want a 10TB raid for an arbiter brick, right?) the amount of inodes will 
be smaller on the arbiter brick. Example (numbers are made up):
Host1 - 10TB disk - 10000 inodes
Host2 - 10TB disk - 10000 inodes
Arbiter - 100G disk - 100 inodes
Once you hit the Arbiter inode limit, you can't use it any more.
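To compare the inode headroom per brick, and (device name is just an assumption) to give a small arbiter LV a larger share of space for inodes when creating the filesystem:
# df -i /gluster_bricks/data             (compare Inodes/IFree across all three hosts)
# mkfs.xfs -f -i size=512,maxpct=75 /dev/gluster_vg/gluster_lv_data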
Reusing old bricks is not a good idea; starting fresh will save you from future 
nightmares. Just remove the 3rd brick, add a new disk on the 3rd host, create 
the brick fresh as the new arbiter, then stop gluster on one host, rebuild the 
raid, reset-brick, wait for the heal, and do the same on the other host.
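On each data host the reset-brick part would look roughly like this (a sketch only, volume name and brick path taken from your plan below):
# gluster volume reset-brick data host1.mgt.example.com:/gluster_bricks/data/data start
(rebuild the RAID, recreate and remount the brick filesystem at the same path)
# gluster volume reset-brick data host1.mgt.example.com:/gluster_bricks/data/data host1.mgt.example.com:/gluster_bricks/data/data commit force
# gluster volume heal data info          (wait until every brick shows 0 entries)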

Best Regards,
Strahil Nikolov

On Tue, Jul 20, 2021 at 2:48, David White via Users <users@ovirt.org> wrote:
Thank you.
I'm doing some more research & reading on this to make sure I understand 
everything before I do this work.

You wrote:
> If you rebuild the raid, you are destroying the brick, so after mounting it 
> back, you will need to reset-brick. If it doesn't work for some reason, you 
> can always remove-brick replica 1 host1:/path/to/brick arbiter:/path/to/brick 
> and readd them with add-brick replica 3 arbiter 1.

Would it be safer to remove-brick replica 1 before I destroy the brick?

Also, you suggested creating a fresh logical volume on the node where I'm 
converting from full replica to arbiter.
Would it not suffice to simply go in and erase all of the data?

I'm still not clear on how to force gluster to use a specific server as the 
arbiter node.
Will gluster just "figure it out" if the logical volume on the arbiter is 
smaller than on the other two nodes?

I'm also still a little bit unclear on why I need to specifically increase the 
inode count when I go to add-brick on the arbiter node. Or is that when (if?) I 
rebuild the Logical Volume?
Do I need to increase the inode count on the other two servers after I grow the 
RAID on the two primary storage servers?

Below is an overview of the specific steps I think I need to take, in order:

Prepare:
Put cluster into global maintenance mode
Run the following commands:
# gluster volume status data inode
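(I assume it also makes sense to double-check the volume layout and heal state first:)
# gluster volume info data
# gluster volume heal data info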

host 3 (Arbiter):
# gluster volume remove-brick data replica 2 host3.mgt.example.com:/gluster_bricks/data/data force
# rm -rf /gluster_bricks/data/data/*
# ALTERNATIVELY... rebuild the logical volume? Is this really necessary?
# gluster volume add-brick data replica 3 arbiter 1 host3.mgt.example.com:/gluster_bricks/data/data

(Let the volumes heal completely) 
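(To verify the heal, I assume checking until every brick reports "Number of entries: 0" is enough:)
# gluster volume heal data info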


host 1
# gluster volume remove-brick data replica 1 host1.mgt.example.com:/gluster_bricks/data/data
(rebuild the array & reboot the server)
# gluster volume add-brick data replica 3 arbiter 1 host1.mgt.example.com:/gluster_bricks/data/data

(Let the volumes heal completely)

host 2
# gluster volume remove-brick data replica 1 host2.mgt.example.com:/gluster_bricks/data/data
(rebuild the array & reboot the server)
# gluster volume add-brick data replica 3 arbiter 1 host2.mgt.example.com:/gluster_bricks/data/data

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Sunday, July 11th, 2021 at 3:14 AM, Strahil Nikolov via Users 
<users@ovirt.org> wrote:

> > 2a) remove-brick replica 2
> 

> -- if I understand correctly, this will basically just reconfigure the 
> existing volume to replicate between the 2 bricks, and not all 3 ... is this 
> correct?
> 

> Yep, you are kicking out the 3rd node and the volume is converted to replica 
> 2.
> 

> Most probably the command would be gluster volume remove-brick vmstore 
> replica 2 host3:/path/to/brick force
> 

> > 2b) add-brick replica 3 arbiter 1
> 

> -- If I understand correctly, this will reconfigure the volume (again), 
> adding the 3rd server's storage back to the Gluster volume, but only as an 
> arbiter node, correct?
> 

> Yes, I would prefer to create a fresh new LV. Don't forget to raise the inode 
> count higher, as this one will be an arbiter brick (see previous e-mail).
> 

> Once you add via gluster volume add-brick vmstore replica 3 arbiter 1 
> host3:/path/to/new/brick , you will have to wait for all heals to complete
> 

>  
> 

> > 3.  Now with everything healthy, the volume is now a Replica 2 / Arbiter 
> > 1.... and I can now stop gluster on each of the 2 servers getting the 
> > storage upgrade, rebuild the RAID on the new storage, reboot, and let 
> > gluster heal itself before moving on to the next server.
> 

> If you rebuild the raid, you are destroying the brick, so after mounting it 
> back, you will need to reset-brick. If it doesn't work for some reason, you 
> can always remove-brick replica 1 host1:/path/to/brick arbiter:/path/to/brick 
> and readd them with add-brick replica 3 arbiter 1.
> 

> I had some paused VMs after raid reshaping (spinning disks) during the 
> healing, but my lab is running on workstations, so do it in the least busy 
> hours, and any backups should have completed before the reconfiguration 
> rather than running during the healing ;)
> 

> Best Regards,
> 

> Strahil Nikolov
> 

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZLUVGVGZCE2HMJ47BND7QKJ5Z6KE2POJ/
