Since oVirt 4.4, the stage that deploys the oVirt node/host adds an LVM
filter to /etc/lvm/lvm.conf, which is the reason behind that.
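
If I remember correctly, that filter is generated by 'vdsm-tool config-lvm-filter'
during host deploy and ends up in the devices { } section of lvm.conf looking
roughly like this (the PV UUID below is only a placeholder):

# accept only the PVs the hypervisor itself needs, reject everything else
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-EXAMPLE$|", "r|.*|"]

Any device not whitelisted there (for example the LVs/disks backing your gluster
bricks) gets rejected by LVM, which would explain why LVM on the brick devices
stopped working after the upgrade.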

Best Regards,
Strahil Nikolov






On Friday, 25 September 2020, 20:52:13 GMT+3, Staniforth, Paul 
<p.stanifo...@leedsbeckett.ac.uk> wrote: 







Thanks,

             the gluster volume is just a test, and the main reason was to test 
the upgrade of a node with gluster bricks.




I don't know why LVM doesn't work, which is what oVirt is using.




Regards,

               Paul S.











________________________________ 
From: Strahil Nikolov <hunter86...@yahoo.com>
Sent: 25 September 2020 18:28
To: Users <users@ovirt.org>; Staniforth, Paul <p.stanifo...@leedsbeckett.ac.uk>
Subject: Re: [ovirt-users] Node 4.4.1 gluster bricks 
 




>1 node I wiped it clean and the other I left the 3 gluster brick drives 
>untouched.

If the last node from the original setup is untouched, you can:
1. Go to the old host and use 'gluster volume remove-brick <VOL> replica 1 
wiped_host:/path/to-brick untouched_bricks_host:/path/to-brick force'
2. Remove the 2 nodes that you have kicked away:
gluster peer detach node2
gluster peer detach node3
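
Purely as an illustration with made-up names (volume 'data', surviving old host
node1, kicked-away hosts node2 and node3, bricks under
/gluster_bricks/data1/data1), steps 1 and 2 would look roughly like this:

gluster volume remove-brick data replica 1 node2:/gluster_bricks/data1/data1 node3:/gluster_bricks/data1/data1 force
gluster peer detach node2
gluster peer detach node3
gluster peer status    # sanity check - node1 should report no other peers now
gluster volume info data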

3. Reinstall the wiped node and install gluster there
4. Create the filesystem on the brick:
mkfs.xfs -i size=512 /dev/mapper/brick_block_device
5. Set up the mount for the Gluster brick (you can copy the fstab entry from the 
working node and adapt it)
Here is an example:
/dev/data/data1 /gluster_bricks/data1 xfs inode64,noatime,nodiratime,nouuid,context="system_u:object_r:glusterd_brick_t:s0" 0 0
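
The /dev/data/data1 device in that example is an LVM logical volume. If the
reinstalled node doesn't have one yet, a minimal sketch (assuming a spare brick
disk /dev/sdb and the same VG/LV names - adjust both to your layout) would be:

pvcreate /dev/sdb                    # /dev/sdb is just a placeholder
vgcreate data /dev/sdb
lvcreate -n data1 -l 100%FREE data
# then format it as in step 4: mkfs.xfs -i size=512 /dev/data/data1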

6. Create the selinux label via 'semanage fcontext -a -t glusterd_brick_t 
"/gluster_bricks/data1(/.*)?"' (remove only the single quotes) and run 
'restorecon -RFvv /gluster_bricks/data1'
7. Mount the FS and create a dir inside the mount point
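
For step 7, sticking with the example paths above, that would be something like:

mkdir -p /gluster_bricks/data1       # mount point (the one labelled in step 6)
mount /gluster_bricks/data1          # or 'mount -a' to pick up the new fstab entry
mkdir /gluster_bricks/data1/data1    # this subdirectory is what gets added as the brick
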
8. Extend the gluster volume:
'gluster volume add-brick <VOL> replica 2 new_host:/gluster_bricks/<dir>/<subdir>'

9. Run a full heal
gluster volume heal <VOL> full
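
The full heal can take a while; you can watch its progress with, for example:

gluster volume heal <VOL> info
gluster volume status <VOL>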

10. Repeat the same steps for the other node, and remember to never wipe 2 nodes at a time :)


Good luck, and take a look at the Quick Start Guide in the Gluster Docs.



Best Regards,
Strahil Nikolov




_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5QJ6IVFVC2PAMWW57QO5S36COYONV7XM/
