Thanks Strahil,
it worked like a charm.
On 7/31/21 5:21 PM, Strahil Nikolov wrote:
You most probably already have an /etc/fstab entry, so just recreate the LV,
recreate the FS (mkfs.xfs -i size=512 /path/to/lv) and mount it.
For source brick and new brick, just use 'hydra4:/gluster1/data'.
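For anyone following along, the steps Strahil describes might look roughly like the sketch below. The VG name, LV name, and size are placeholders (not from this thread); check 'vgs'/'lvs' and the existing /etc/fstab entry before running anything, and test on non-prod first as he says.

```shell
# Recreate the LV -- volume group, LV name, and size are assumed here;
# substitute the values from your old layout / fstab.
lvcreate -L 500G -n gluster1 myvg

# Recreate the filesystem with 512-byte inodes, as recommended
# for Gluster bricks.
mkfs.xfs -i size=512 /dev/myvg/gluster1

# With the /etc/fstab entry already in place, a bare mount of the
# mount point is enough.
mount /gluster1
```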
Don't forget to test on non-prod first ;)
Best Regards,
Strahil Nikolov
Thanks Strahil,
I have a couple of questions.
My /gluster1 is mounted, but there is no "data" folder; I suppose that will be created when I reset the brick. I see two versions of the reset-brick operation:
gluster reset-brick start
gluster reset-brick commit
I assume I should use the "start" one first, then "commit".
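For reference, the two-step flow being discussed usually runs like the sketch below; the volume name (MRIData) and brick path (hydra4:/gluster1/data) are taken from elsewhere in this thread, so adjust to taste and try it on non-prod first.

```shell
# Step 1: take the brick offline so it can be rebuilt.
gluster volume reset-brick MRIData hydra4:/gluster1/data start

# ... recreate the LV, mkfs, and remount the brick filesystem here ...

# Step 2: commit, reusing the same source and new brick path.
# 'force' is typically needed when the new brick path is identical
# to the old one.
gluster volume reset-brick MRIData hydra4:/gluster1/data \
    hydra4:/gluster1/data commit force
```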
What is the 'gluster volume heal info summary' output?
Best Regards,
Strahil Nikolov
On Fri, Jul 30, 2021 at 23:44, Valerio Luccio wrote:
Hello all,
I have a gluster (v. 5.13) on 4 CentOS 7.8 nodes. I recently had hardware
problems on the RAIDs. I was able to get it back, but I [...]
Thanks Strahil,
Head and tail of the info command:
gluster volume heal MRIData info
Brick hydra1:/gluster1/data
Status: Connected
Number of entries: 0
Brick hydra1:/gluster2/data
Status: Connected
Number of entries: 0
Brick hydra1:/arbiter/1
Status: Connected
Strahil,
did some more digging into the heal info output and also found the following:
[...]
Brick hydra4:/gluster1/data
Status: Transport endpoint is not connected
Number of entries: -
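"Transport endpoint is not connected" for a brick usually means the brick process itself is down on that node. A common first check (a sketch, assuming systemd on hydra4; the volume name MRIData comes from earlier in the thread):

```shell
# Look for an offline brick (missing PID / "N" in the Online column).
gluster volume status MRIData

# Is the management daemon itself running on the affected node?
systemctl status glusterd

# 'start ... force' only starts brick processes that are not running;
# it does not disturb bricks that are already online.
gluster volume start MRIData force
```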
Hello all,
I have a gluster (v. 5.13) on 4 CentOS 7.8 nodes. I recently had
hardware problems on the RAIDs. I was able to get it back, but I noticed
some odd things, so I did a "gluster volume heal info" and found a ton
of errors. When I tried to do "gluster volume heal" I got the message: