Good day,
The distributed volume was created manually. I'm currently thinking of
creating a replica on the two new servers, where one server will
temporarily hold 2 bricks; I'll replace it later and then consolidate that
server's 2 bricks back into 1.
I found the image location
/gluster_bricks/data/data/19cdda62-da1c-4821-9e27-2b2585ededff/images and
plan to create a backup on cold storage,
but I'm not sure how to transfer it to a new instance of the engine.
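For the cold-storage copy itself, a checksummed rsync of the images directory is one option. A minimal sketch, assuming a hypothetical destination mount /mnt/coldstorage and that the VMs are stopped (or the images otherwise quiesced) so the copy is consistent:

```shell
# Source path is the one found above; destination is a hypothetical
# cold-storage mount point -- adjust both to your environment.
SRC=/gluster_bricks/data/data/19cdda62-da1c-4821-9e27-2b2585ededff/images
DST=/mnt/coldstorage/images-backup

# -a preserves ownership/permissions/timestamps, --sparse keeps thin
# images thin on the target, --checksum compares content rather than
# trusting size/mtime on repeated runs.
rsync -a --sparse --checksum "$SRC/" "$DST/"
```

Running it a second time only transfers files whose content actually changed, which makes it reasonable for periodic cold-storage refreshes.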


On Mon, May 10, 2021 at 11:50 AM Strahil Nikolov <[email protected]>
wrote:

> I don't see how the data will be lost.
> The only risk I see is adding the 2 new hosts to the Gluster TSP (Cluster)
> and then stopping them for some reason (like maintenance). You would lose
> quorum (in this hypothetical scenario) and thus all storage would be
> unavailable.
>
> Overall the process is:
> 1. Prepare your HW RAID (unless you are using NVMes -> JBOD) and note down
> the stripe size and the number of data disks (RAID 10 -> half the disks,
> RAID 5 -> disk count minus 1 due to parity)
> 2. Add the new device in lvm filter
> 3. 'pvcreate' with alignment parameters
> 4. vgcreate
> 5. Thinpool LV creation with the relevant chunk size (between 1 MB and 2
> MB, based on the HW RAID stripe size * number of data disks)
> 6. 'lvcreate'
> 7. XFS creation (again, alignment is needed, plus the inode size parameter set to 512)
> 8. Mount the brick (if using SELinux you can use the mount option context=
> system_u:object_r:glusterd_brick_t:s0 ) with noatime/relatime
> Don't forget to add it to /etc/fstab or create the relevant systemd '.mount'
> unit
> 9. Add the node in the TSP
> From the first node: gluster peer probe <new_node>
> 10. When you add the 2 new hosts and their bricks are ready to be used:
> gluster volume add-brick <VOLUME_NAME> replica 3 host2:/path/to/brick
> host3:/path/to/brick
>
> 11. Wait for the healing to be done
>
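The steps above can be sketched end to end as shell commands. This is a minimal, hedged sketch, not a drop-in script: the device (/dev/sdb), VG/LV names, sizes, hostnames, and the geometry of a 256 KiB stripe with 5 data disks (RAID 5 on 6 disks) are all assumed examples, so substitute your own values and verify each command against your distribution's man pages before running it.

```shell
# Assumed example geometry: RAID 5 on 6 disks -> 5 data disks, 256 KiB stripe.
STRIPE_K=256
DATA_DISKS=5
FULL_STRIPE_K=$((STRIPE_K * DATA_DISKS))   # full stripe = 1280 KiB

# Step 2: permit the new device in the LVM filter (edit /etc/lvm/lvm.conf).
# Step 3: PV aligned to the full stripe.
pvcreate --dataalignment ${FULL_STRIPE_K}k /dev/sdb
# Step 4: VG on top of it (name is an example).
vgcreate gluster_vg /dev/sdb
# Step 5: thin pool; chunk size follows the full stripe (1-2 MB range).
lvcreate -L 100G --thinpool gluster_thinpool \
    --chunksize ${FULL_STRIPE_K}k gluster_vg
# Step 6: thin LV for the brick (virtual size is an example).
lvcreate -V 500G --thin -n gluster_lv_data gluster_vg/gluster_thinpool
# Step 7: XFS aligned to the RAID geometry, 512-byte inodes.
mkfs.xfs -i size=512 -d su=${STRIPE_K}k,sw=${DATA_DISKS} \
    /dev/gluster_vg/gluster_lv_data
# Step 8: mount with the SELinux brick context and noatime, then persist.
mkdir -p /gluster_bricks/data
mount -o noatime,context="system_u:object_r:glusterd_brick_t:s0" \
    /dev/gluster_vg/gluster_lv_data /gluster_bricks/data
echo '/dev/gluster_vg/gluster_lv_data /gluster_bricks/data xfs noatime,context="system_u:object_r:glusterd_brick_t:s0" 0 0' >> /etc/fstab
# Step 9: from the first node, add each new host to the TSP.
gluster peer probe host2
gluster peer probe host3
# Step 10: convert the volume to replica 3 with the two new bricks.
gluster volume add-brick <VOLUME_NAME> replica 3 \
    host2:/gluster_bricks/data/data host3:/gluster_bricks/data/data
# Step 11: watch healing until no entries remain.
gluster volume heal <VOLUME_NAME> info
```

Note the chunk-size arithmetic: 256 KiB stripe * 5 data disks gives a 1280 KiB full stripe, which lands inside the 1-2 MB window mentioned above; with a different RAID layout the multiplication changes accordingly.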
> Some sources:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/brick_configuration
>
>
> Best Regards,
> Strahil Nikolov
>
> On Sun, May 9, 2021 at 7:04, Ernest Clyde Chua
> <[email protected]> wrote:
>
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/DCFY76FCMYD5G3HAJWCXN3ZS5A4UOEEB/