Yes. The arbiter is useless with only 1 data brick, as it contains only metadata,
and if you have 1 full brick (data + metadata) and 1 arbiter (only metadata) in
the volume, you could end up in a metadata split brain (the risk is the same as
with replica 2 volumes).
Once you have the new data brick, you
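A rough sketch of the re-add, assuming the same brick paths from this thread are
reused (the old brick directories would typically need to be cleared of stale data
and gluster xattrs first; exact add-brick steps may vary by version):

$ gluster volume add-brick GV2Data replica 2 office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick
$ gluster volume add-brick GV2Data replica 3 arbiter 1 office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick

followed by a full heal, e.g. 'gluster volume heal GV2Data full'.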
Hello,
Many thanks for your reply.
So not only did I need to remove the broken brick
(office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick), I should have also removed
the active arbiter (office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick)?
Thanks again,
Gilboa
On Fri, Jul 22, 2022 at 10:09 PM Strahil Nikolov
There is no need to stop the volume, the operation can be done online.
gluster volume remove-brick GV2Data replica 1 \
    office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick \
    office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick force
replica 1 -> remaining copies (only Brick 2 remains)
When you get a replacement brick
Hello,
$ gluster volume info GV2Data
Volume Name: GV2Data
Type: Replicate
Volume ID: c1946fc2-ed94-4b9f-9da3-f0f1ee90f303
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick <-- This is the dead brick
The remove-brick expects the bricks that have to be removed. Yet you specified
only 1 brick, so 2 would be left in the volume, while you asked for 'replica 1'.
Define both the data brick and the arbiter brick.
Best Regards,
Strahil Nikolov
On Wed, Jul 20, 2022 at 13:34, Gilboa Davara wrote:
Hello,
Tried it:
$ gluster volume remove-brick GV2Data replica 1 \
    office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick force
Running remove-brick with cluster.force-migration enabled can result in
data corruption. It is safer to disable this option so that files that
receive writes during migration are
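The warning refers to the cluster.force-migration volume option. Following its
advice and disabling the option before the remove-brick would presumably look
something like this (the "off" value is an assumption about the accepted boolean
form):

$ gluster volume set GV2Data cluster.force-migration off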
Replacing a dead brick in a 2+1 scenario (2 data + 1 arbiter brick) requires
reducing the replica count to 1 by removing both the dead brick and the arbiter.
Use the force option, as you are not using a distributed-replicated volume.
Best Regards,
Strahil Nikolov
On Mon, Jul 18, 2022 at 11:36,
If I'm understanding your question / setup correctly, the best way
would be to simply mount -o bind the old path to the new one. The old
path would still be used by gluster, but it would ultimately go to the
new location.
Changing the brick path on a single brick while leaving the original
path
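A minimal sketch of that bind mount; the new location below is purely
illustrative, only /mnt/LogGFSData/brick is the real path from this thread:

$ mount -o bind /data/new-brick-location /mnt/LogGFSData/brick

To make it survive reboots, the same mapping can go into /etc/fstab using the
bind option, e.g. '/data/new-brick-location /mnt/LogGFSData/brick none bind 0 0'.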
Hello,
Many thanks for your email.
I should add that this is a test environment we set up in preparation for a
planned upgrade of one of our old(er) oVirt clusters from CentOS 7 / oVirt 4.3
to CentOS 8 Streams / oVirt 4.5.
In this case, we blew up the software RAID during the OS replacement
What you are missing is the fact that gluster requires more than one
set of bricks to recover from a dead host. I.e., in your setup, you'd
need 6 hosts: 4x replicas and 2x arbiters, with at least one set (2x
replicas and 1x arbiter) operational as a bare minimum.
Automated commands to fix the volume do
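For illustration only (hostnames and paths here are hypothetical), a
distributed-replicated layout with two such sets could be created along these
lines, which gluster then reports as 'Number of Bricks: 2 x (2 + 1) = 6':

$ gluster volume create testvol replica 3 arbiter 1 \
      hostA:/bricks/b1 hostB:/bricks/b1 hostC:/bricks/arb1 \
      hostD:/bricks/b2 hostE:/bricks/b2 hostF:/bricks/arb2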