In your case it seems reasonable, but you should test both stripe sizes (128K 
vs 256K) before running in production. The good thing about replica volumes is 
that you can remove a brick, recreate it from the CLI, and then add it back.
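For anyone following along, that replace cycle could look roughly like the sketch below. All names are hypothetical placeholders (volume "vmstore", host "serverB", brick path /brick/brick2, LV /dev/vg_brick/lv_brick), and the exact replica counts and flags depend on your layout and Gluster version, so verify against `gluster volume help` before running anything:

```
# 1. Drop the brick out of the replica set (replica 3 -> replica 2):
gluster volume remove-brick vmstore replica 2 serverB:/brick/brick2 force

# 2. Recreate the brick filesystem with the stripe geometry under test,
#    e.g. a 256K stripe unit across 4 data disks (su * sw = 1 MiB full stripe):
umount /brick
mkfs.xfs -f -i size=512 -d su=256k,sw=4 /dev/vg_brick/lv_brick
mount /dev/vg_brick/lv_brick /brick
mkdir /brick/brick2

# 3. Add the brick back (back to replica 3) and let self-heal repopulate it:
gluster volume add-brick vmstore replica 3 serverB:/brick/brick2 force
gluster volume heal vmstore full
```

Repeat with su=128k to benchmark the other geometry before committing.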

Best Regards,
Strahil Nikolov

On Wednesday, September 30, 2020, 17:03:11 GMT+3, C Williams 
<cwilliams3...@gmail.com> wrote: 

Hello,

I am planning to use (6) 1 TB disks in a hardware RAID 6 array which will yield 
4 data disks per oVirt Gluster brick.

However, the Red Hat Gluster Storage administration guide 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
states:

"For RAID 6, the stripe unit size must be chosen such that the full stripe size 
(stripe unit * number of data disks) is between 1 MiB and 2 MiB, preferably in 
the lower end of the range. Hardware RAID controllers usually allow stripe unit 
sizes that are a power of 2. For RAID 6 with 12 disks (10 data disks), the 
recommended stripe unit size is 128KiB"

In my case, each oVirt Gluster brick would have 4 data disks, which (by the 
guidance above) would mean a 256K hardware RAID stripe unit to bring the full 
stripe size to 1 MiB (4 data disks x 256K hardware RAID stripe unit = 1024K = 1 MiB).
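Sanity-checking that arithmetic (this just applies the doc's formula, full stripe = stripe unit x number of data disks, to both candidate stripe units):

```shell
# Full stripe size = RAID stripe unit * number of data disks (4 data disks here).
for su_k in 128 256; do
  echo "stripe unit ${su_k}K -> full stripe $(( su_k * 4 ))K"
done
```

So with 4 data disks, 256K lands exactly on the 1 MiB lower bound of the recommended range, while 128K would give a 512K full stripe, below it.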

Could I use a smaller stripe unit, e.g. 128K? Most of the oVirt virtual disk 
traffic will be small sharded files, and a 256K stripe unit seems pretty big 
for this type of data.

Thanks For Your Help !!

On Tue, Sep 29, 2020 at 8:22 PM C Williams <cwilliams3...@gmail.com> wrote:
> Strahil,
> 
> Once Again Thank You For Your Help !  
> 
> On Tue, Sep 29, 2020 at 11:29 AM Strahil Nikolov <hunter86...@yahoo.com> 
> wrote:
>> One important step is to align the XFS filesystem to the stripe unit size * 
>> stripe width. Don't miss it, or you might have issues.
>> 
>> Details can be found at: 
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> On Tuesday, September 29, 2020, 16:36:10 GMT+3, C Williams 
>> <cwilliams3...@gmail.com> wrote: 
>> 
>> Hello,
>> 
>> We have decided to get a 6th server for the install. I hope to set up a 2x3 
>> distributed replica 3 volume.
>> 
>> So we are not going to worry about the "5 server" situation.
>> 
>> Thank You All For Your Help !!
>> 
>> On Mon, Sep 28, 2020 at 5:53 PM C Williams <cwilliams3...@gmail.com> wrote:
>>> Hello,
>>> 
>>> More questions on this, since I have 5 servers. Could the following work? 
>>> Each server has (1) 3TB RAID 6 partition that I want to use for 
>>> contiguous storage.
>>> 
>>> Mountpoint for the RAID 6 partition (3TB): /brick
>>> 
>>> Server A: VOL1 - Brick 1                    directory /brick/brick1 (VOL1 data brick)
>>> Server B: VOL1 - Brick 2 + VOL2 - Brick 3   directory /brick/brick2 (VOL1 data brick), /brick/brick3 (VOL2 arbiter brick)
>>> Server C: VOL1 - Brick 3                    directory /brick/brick3 (VOL1 data brick)
>>> Server D: VOL2 - Brick 1                    directory /brick/brick1 (VOL2 data brick)
>>> Server E: VOL2 - Brick 2                    directory /brick/brick2 (VOL2 data brick)
>>> 
>>> Questions about this configuration:
>>> 1. Is it safe to use a mount point 2 times? 
>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
>>> says "Ensure that no more than one brick is created from a single mount." 
>>> In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B.
>>> 
>>> 2. Could I start a standard replica 3 (VOL1) with 3 data bricks and add 2 
>>> additional data bricks plus 1 arbiter brick (VOL2) to create a 
>>> distributed-replicate cluster providing ~6TB of contiguous storage? 
>>> By contiguous storage I mean that df -h would show ~6 TB of disk space.
>>> 
>>> Thank You For Your Help !!
>>> 
>>> On Mon, Sep 28, 2020 at 4:04 PM C Williams <cwilliams3...@gmail.com> wrote:
>>>> Strahil,
>>>> 
>>>> Thank You For Your Help !
>>>> 
>>>> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov <hunter86...@yahoo.com> 
>>>> wrote:
>>>>> You can set up your bricks in such a way that each host has at least 1 
>>>>> brick.
>>>>> For example:
>>>>> Server A: VOL1 - Brick 1
>>>>> Server B: VOL1 - Brick 2 + VOL2 - Brick 3
>>>>> Server C: VOL1 - Brick 3 + VOL2 - Brick 2
>>>>> Server D: VOL2 - Brick 1
>>>>> 
>>>>> The optimal approach is to find a small system/VM to act as an arbiter 
>>>>> and run a 'replica 3 arbiter 1' volume.
>>>>> 
>>>>> Best Regards,
>>>>> Strahil Nikolov
>>>>> 
>>>>> On Monday, September 28, 2020, 20:46:16 GMT+3, Jayme 
>>>>> <jay...@gmail.com> wrote: 
>>>>> 
>>>>> It might be possible to do something similar to what is described in the 
>>>>> documentation here: 
>>>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
>>>>> -- but I'm not sure if oVirt HCI would support it. You might have to 
>>>>> roll your own GlusterFS storage solution. Someone with more 
>>>>> Gluster/HCI knowledge might know better.
>>>>> 
>>>>> On Mon, Sep 28, 2020 at 1:26 PM C Williams <cwilliams3...@gmail.com> 
>>>>> wrote:
>>>>>> Jayme,
>>>>>> 
>>>>>> Thanks for getting back to me! 
>>>>>> 
>>>>>> If I wanted to be wasteful with storage, could I start with an initial 
>>>>>> replica 2 + arbiter and then add 2 bricks to the volume? Could the 
>>>>>> arbiter resolve split-brains for 4 bricks?
>>>>>> 
>>>>>> Thank You For Your Help !
>>>>>> 
>>>>>> On Mon, Sep 28, 2020 at 12:05 PM Jayme <jay...@gmail.com> wrote:
>>>>>>> You can only do HCI in multiples of 3. You could do a 3-server HCI 
>>>>>>> setup and add the other two servers as compute nodes, or you could add 
>>>>>>> a 6th server and expand HCI across all 6.
>>>>>>> 
>>>>>>> On Mon, Sep 28, 2020 at 12:28 PM C Williams <cwilliams3...@gmail.com> 
>>>>>>> wrote:
>>>>>>>> Hello,
>>>>>>>> 
>>>>>>>> We recently received 5 servers. All have about 3 TB of storage. 
>>>>>>>> 
>>>>>>>> I want to deploy an oVirt HCI using as much of my storage and compute 
>>>>>>>> resources as possible. 
>>>>>>>> 
>>>>>>>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>>>>>>> 
>>>>>>>> I have deployed replica 3s and know about replica 2 + arbiter -- but 
>>>>>>>> an arbiter would not be applicable here -- since I have equal storage 
>>>>>>>> on all of the planned bricks.
>>>>>>>> 
>>>>>>>> Thank You For Your Help !!
>>>>>>>> 
>>>>>>>> C Williams
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list -- users@ovirt.org
>>>>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>>>>>> oVirt Code of Conduct: 
>>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>> List Archives: 
>>>>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXTNUFZ3BASW2CWX7CWTMQS22324J437/
