[ovirt-users] Re: Replica Question

2020-09-30 Thread C Williams
Thank You Strahil !!

On Wed, Sep 30, 2020 at 12:21 PM Strahil Nikolov 
wrote:

> In your case it seems reasonable, but you should test the two stripe sizes
> (128K vs 256K) before running in production. The good thing about replica
> volumes is that you can remove a brick, recreate it from the CLI, and then
> add it back.
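
Strahil's suggestion to test both stripe sizes can be done with a quick fio run against a test filesystem on each candidate array (a sketch; the mount point and job parameters are placeholders to adapt to the actual workload):

```shell
# Random-write test with a block size in the range of sharded VM traffic.
# /mnt/test is a hypothetical mount of the candidate RAID 6 array.
fio --name=vm-like --directory=/mnt/test --size=2g \
    --rw=randwrite --bs=64k --direct=1 \
    --ioengine=libaio --iodepth=16 \
    --runtime=60 --time_based --group_reporting
```

Run it once with the array built at a 128K stripe unit and once at 256K, and compare the reported bandwidth and latency.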
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Wednesday, 30 September 2020 at 17:03:11 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
>
>
>
>
>
> Hello,
>
> I am planning to use (6) 1 TB disks in a hardware RAID 6 array which will
> yield 4 data disks per oVirt Gluster brick.
>
> However,
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
> states:
>
> "For RAID 6, the stripe unit size must be chosen such that the full stripe
> size (stripe unit * number of data disks) is between 1 MiB and 2 MiB,
> preferably in the lower end of the range. Hardware RAID controllers usually
> allow stripe unit sizes that are a power of 2. For RAID 6 with 12 disks (10
> data disks), the recommended stripe unit size is 128KiB"
>
> In my case, each oVirt Gluster brick would have 4 data disks, which would
> mean a 256K hardware RAID stripe size (by the guidance above) to bring the
> full stripe size to 1 MiB (4 data disks x 256K hardware RAID
> stripe = 1024K).
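
The arithmetic above can be sanity-checked in a couple of shell lines (adjust the disk count for other RAID layouts):

```shell
# Full stripe size (KiB) = RAID stripe unit (KiB) * number of data disks.
# A 6-disk RAID 6 array has 6 - 2 = 4 data disks.
data_disks=4
for unit_kib in 128 256; do
  echo "stripe unit ${unit_kib}KiB -> full stripe $((unit_kib * data_disks))KiB"
done
```

Only the 256K unit reaches the documented 1-2 MiB window (1024 KiB full stripe); 128K yields 512 KiB, below the recommended range.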
>
> Could I use a smaller stripe size, e.g. 128K, since most of the oVirt
> virtual disk traffic will be small sharded files, and a 256K stripe seems
> pretty big for this type of data?
>
> Thanks For Your Help !!
>
>
>
>
>
> On Tue, Sep 29, 2020 at 8:22 PM C Williams 
> wrote:
> > Strahil,
> >
> > Once Again Thank You For Your Help !
> >
> > On Tue, Sep 29, 2020 at 11:29 AM Strahil Nikolov 
> wrote:
> >> One important step is to align the XFS filesystem to the stripe unit *
> stripe width. Don't miss it or you might have issues.
> >>
> >> Details can be found at:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
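
For the 4-data-disk, 256K-stripe-unit case discussed in this thread, that XFS alignment would look roughly like this (a sketch; the device path and mount point are placeholders, and the 512-byte inode size follows the Red Hat Gluster guidance):

```shell
# su = hardware RAID stripe unit, sw = number of data disks.
mkfs.xfs -f -i size=512 -d su=256k,sw=4 /dev/sdX1
# Verify the result; xfs_info reports sunit/swidth in 512-byte sectors.
xfs_info /mnt/brick
```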
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Tuesday, 29 September 2020 at 16:36:10 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
> >>
> >>
> >>
> >>
> >>
> >> Hello,
> >>
> >> We have decided to get a 6th server for the install. I hope to set up a
> 2x3 distributed replica 3.
> >>
> >> So we are not going to worry about the "5 server" situation.
> >>
> >> Thank You All For Your Help !!
> >>
> >> On Mon, Sep 28, 2020 at 5:53 PM C Williams 
> wrote:
> >>> Hello,
> >>>
> >>> More questions on this -- since I have 5 servers, could the following
> work? Each server has (1) 3TB RAID 6 partition that I want to use for
> contiguous storage.
> >>>
> >>> Mountpoint for RAID 6 partition (3TB): /brick
> >>>
> >>> Server A: VOL1 - Brick 1 -> /brick/brick1 (VOL1 data brick)
> >>> Server B: VOL1 - Brick 2 + VOL2 - Brick 3 -> /brick/brick2 (VOL1 data brick), /brick/brick3 (VOL2 arbiter brick)
> >>> Server C: VOL1 - Brick 3 -> /brick/brick3 (VOL1 data brick)
> >>> Server D: VOL2 - Brick 1 -> /brick/brick1 (VOL2 data brick)
> >>> Server E: VOL2 - Brick 2 -> /brick/brick2 (VOL2 data brick)
> >>>
> >>> Questions about this configuration
> >>> 1. Is it safe to use a mount point twice?
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
> says "Ensure that no more than one brick is created from a single mount."
> In my example, VOL1 and VOL2 share the mountpoint /brick on Server B.
> >>>
> >>> 2. Could I start a standard replica 3 (VOL1) with 3 data bricks and
> add 2 additional data bricks plus 1 arbiter brick (VOL2) to create a
> distributed-replicate cluster providing ~6 TB of contiguous storage?
> >>> By contiguous storage I mean that df -h would show ~6 TB of disk
> space.
> >>>
> >>> Thank You For Your Help !!
> >>>
> >>> On Mon, Sep 28, 2020 at 4:04 PM C Williams 
> wrote:
>  Strahil,
> 
>  Thank You For Your Help !
> 
>  On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
> > You can set up your bricks in such a way that each host has at least
> 1 brick.
> > For example:
> > Server A: VOL1 - Brick 1
> > Server B: VOL1 - Brick 2 + VOL2 - Brick 3
> > Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
> > Server D: VOL2 - brick 1
> >
> > The optimal approach is to find a small system/VM to act as an arbiter
> and have a 'replica 3 arbiter 1' volume.
> >
> > Best Regards,
> > Strahil Nikolov
> >
> >
> >
> >
> >
> On Monday, 28 September 2020 at 20:46:16 GMT+3, Jayme <
> jay...@g

[ovirt-users] Re: Replica Question

2020-09-29 Thread C Williams
Strahil,

Once Again Thank You For Your Help !

On Tue, Sep 29, 2020 at 11:29 AM Strahil Nikolov 
wrote:

> One important step is to align the XFS filesystem to the stripe unit *
> stripe width. Don't miss it or you might have issues.
>
> Details can be found at:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#sect-Disk_Configuration
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Tuesday, 29 September 2020 at 16:36:10 GMT+3, C Williams <
> cwilliams3...@gmail.com> wrote:
>
>
>
>
>
> Hello,
>
> We have decided to get a 6th server for the install. I hope to set up a
> 2x3 distributed replica 3.
>
> So we are not going to worry about the "5 server" situation.
>
> Thank You All For Your Help !!
>
> On Mon, Sep 28, 2020 at 5:53 PM C Williams 
> wrote:
> > Hello,
> >
> > More questions on this -- since I have 5 servers, could the following
> work? Each server has (1) 3TB RAID 6 partition that I want to use for
> contiguous storage.
> >
> > Mountpoint for RAID 6 partition (3TB): /brick
> >
> > Server A: VOL1 - Brick 1 -> /brick/brick1 (VOL1 data brick)
> > Server B: VOL1 - Brick 2 + VOL2 - Brick 3 -> /brick/brick2 (VOL1 data brick), /brick/brick3 (VOL2 arbiter brick)
> > Server C: VOL1 - Brick 3 -> /brick/brick3 (VOL1 data brick)
> > Server D: VOL2 - Brick 1 -> /brick/brick1 (VOL2 data brick)
> > Server E: VOL2 - Brick 2 -> /brick/brick2 (VOL2 data brick)
> >
> > Questions about this configuration
> > 1. Is it safe to use a mount point twice?
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
> says "Ensure that no more than one brick is created from a single mount."
> In my example, VOL1 and VOL2 share the mountpoint /brick on Server B.
> >
> > 2. Could I start a standard replica 3 (VOL1) with 3 data bricks and add
> 2 additional data bricks plus 1 arbiter brick (VOL2) to create a
> distributed-replicate cluster providing ~6 TB of contiguous storage?
> >  By contiguous storage I mean that df -h would show ~6 TB of disk space.
> >
> > Thank You For Your Help !!
> >
> > On Mon, Sep 28, 2020 at 4:04 PM C Williams 
> wrote:
> >> Strahil,
> >>
> >> Thank You For Your Help !
> >>
> >> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
> wrote:
> >>> You can set up your bricks in such a way that each host has at least 1
> brick.
> >>> For example:
> >>> Server A: VOL1 - Brick 1
> >>> Server B: VOL1 - Brick 2 + VOL2 - Brick 3
> >>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
> >>> Server D: VOL2 - brick 1
> >>>
> >>> The optimal approach is to find a small system/VM to act as an arbiter
> and have a 'replica 3 arbiter 1' volume.
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Monday, 28 September 2020 at 20:46:16 GMT+3, Jayme <
> jay...@gmail.com> wrote:
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> It might be possible to do something similar as described in the
> documentation here:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
> -- but I'm not sure if oVirt HCI would support it. You might have to roll
> out your own GlusterFS storage solution. Someone with more Gluster/HCI
> knowledge might know better.
> >>>
> >>> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
> wrote:
>  Jayme,
> 
>  Thanks for getting back with me!
> 
>  If I wanted to be wasteful with storage, could I start with an
> initial replica 2 + arbiter and then add 2 bricks to the volume? Could the
> arbiter resolve split-brains for 4 bricks?
> 
>  Thank You For Your Help !
> 
>  On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
> > You can only do HCI in multiples of 3. You could do a 3-server HCI
> setup and add the other two servers as compute nodes, or you could add a 6th
> server and expand HCI across all 6.
> >
> > On Mon, Sep 28, 2020 at 12:28 PM C Williams 
> wrote:
> >> Hello,
> >>
> >> We recently received 5 servers. All have about 3 TB of storage.
> >>
> >> I want to deploy an oVirt HCI using as much of my storage and
> compute resources as possible.
> >>
> >> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
> >>
> >> I have deployed replica 3s and know about replica 2 + arbiter --
> but an arbiter would not be applicable here -- since I have equal storage
> on all of the planned bricks.
> >>
> >> Thank You For Your Help !!
> >>
> >> C Williams
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-l

[ovirt-users] Re: Replica Question

2020-09-29 Thread C Williams
Strahil,

Thank You For Your Help !

On Tue, Sep 29, 2020 at 11:24 AM Strahil Nikolov 
wrote:

>
> More questions on this -- since I have 5 servers, could the following
> work? Each server has (1) 3TB RAID 6 partition that I want to use for
> contiguous storage.
>
> Mountpoint for RAID 6 partition (3TB): /brick
>
> Server A: VOL1 - Brick 1 -> /brick/brick1 (VOL1 data brick)
> Server B: VOL1 - Brick 2 + VOL2 - Brick 3 -> /brick/brick2 (VOL1 data brick), /brick/brick3 (VOL2 arbiter brick)
> Server C: VOL1 - Brick 3 -> /brick/brick3 (VOL1 data brick)
> Server D: VOL2 - Brick 1 -> /brick/brick1 (VOL2 data brick)
> Server E: VOL2 - Brick 2 -> /brick/brick2 (VOL2 data brick)
>
> Questions about this configuration
> 1. Is it safe to use a mount point twice?
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
> says "Ensure that no more than one brick is created from a single mount."
> In my example, VOL1 and VOL2 share the mountpoint /brick on Server B.
>
> As long as you keep the arbiter brick (VOL2) separate from the data brick
> (VOL1), it will be fine. A brick is a unique combination of Gluster TSP
> node + mount point. You can use /brick/brick1 on all your nodes and the
> volume will be fine. Using "/brick/brick1" for a data brick (VOL1) and
> "/brick/brick1" for an arbiter brick (VOL2) on the same host IS NOT
> ACCEPTABLE. So just keep the brick paths unique and everything will be
> fine. Something like this may be easier to work with, but you can
> name them anything you want as long as you don't use the same brick in 2
> volumes:
> serverA:/gluster_bricks/VOL1/brick1
> serverB:/gluster_bricks/VOL1/brick2
> serverB:/gluster_bricks/VOL2/arbiter
> serverC:/gluster_bricks/VOL1/brick3
> serverD:/gluster_bricks/VOL2/brick1
> serverE:/gluster_bricks/VOL2/brick2
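
Under that naming scheme, the two volumes could be created roughly as follows (a sketch with placeholder hostnames; oVirt's own HCI tooling may manage volume creation for you, so treat this as the plain-Gluster view):

```shell
# VOL1: plain replica 3 across servers A, B, C.
gluster volume create VOL1 replica 3 \
  serverA:/gluster_bricks/VOL1/brick1 \
  serverB:/gluster_bricks/VOL1/brick2 \
  serverC:/gluster_bricks/VOL1/brick3

# VOL2: replica 3 arbiter 1 -- two data bricks plus B's arbiter brick.
gluster volume create VOL2 replica 3 arbiter 1 \
  serverD:/gluster_bricks/VOL2/brick1 \
  serverE:/gluster_bricks/VOL2/brick2 \
  serverB:/gluster_bricks/VOL2/arbiter

gluster volume start VOL1 && gluster volume start VOL2
```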
>
>
> 2. Could I start a standard replica 3 (VOL1) with 3 data bricks and add 2
> additional data bricks plus 1 arbiter brick (VOL2) to create a
> distributed-replicate cluster providing ~6 TB of contiguous storage?
>  By contiguous storage I mean that df -h would show ~6 TB of disk space.
>
> No, you either use 6 data bricks (subvol1 -> 3 data bricks, subvol2 -> 3
> data bricks), or you use 4 data + 2 arbiter bricks (subvol 1 -> 2 data + 1
> arbiter, subvol 2 -> 2 data + 1 arbiter). The good thing is that you can
> reshape the volume once you have more disks.
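
Either way, the capacity df -h reports for a distributed-replicate volume is (brick size) x (number of replica subvolumes); a quick check of the ~6 TB figure using this thread's numbers:

```shell
# Replicas within a subvolume mirror each other, so only the
# subvolume count multiplies the usable size.
brick_tb=3     # each server contributes one 3 TB RAID 6 brick
subvolumes=2   # e.g. 2 x (replica 3) or 2 x (2 data + 1 arbiter)
echo "df -h would show about $((brick_tb * subvolumes)) TB"
```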
>
>
> If you have only Linux VMs, you can follow point 1 and create 2 volumes,
> which will be 2 storage domains in oVirt. Then you can stripe (software
> RAID 0 via mdadm, or LVM natively) your VMs with 1 disk from the first
> volume and 1 disk from the second volume.
>
> Actually, I'm using 4 Gluster volumes for my NVMe as my network is too
> slow. My VMs have 4 disks in a RAID 0 (boot) and a striped LV (for "/").
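
Inside a guest, that LVM striping could be set up along these lines (a sketch; the device names are placeholders for the two virtual disks, one from each storage domain):

```shell
# Build a striped logical volume across the two virtual disks.
pvcreate /dev/vdb /dev/vdc
vgcreate vgdata /dev/vdb /dev/vdc
# -i 2 stripes across both PVs; -I sets the stripe size in KiB.
lvcreate -i 2 -I 256 -n data -l 100%FREE vgdata
mkfs.xfs /dev/vgdata/data
```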
>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CQAOSAG3VMKIAM62UWDA67X3DOVNI6K3/




[ovirt-users] Re: Replica Question

2020-09-29 Thread C Williams
Hello,

We have decided to get a 6th server for the install. I hope to set up a 2x3
Distributed replica 3 .

So we are not going to worry about the "5 server" situation.

Thank You All For Your Help !!

On Mon, Sep 28, 2020 at 5:53 PM C Williams  wrote:

> Hello,
>
> More questions on this -- since I have 5 servers . Could the following
> work ?   Each server has (1) 3TB RAID 6 partition that I want to use for
> contiguous storage.
>
> Mountpoint for RAID 6 partition (3TB)  /brick
>
> Server A: VOL1 - Brick 1  directory
>  /brick/brick1 (VOL1 Data brick)
>
> Server B: VOL1 - Brick 2 + VOL2 - Brick 3  directory  /brick/brick2
> (VOL1 Data brick)   /brick/brick3 (VOL2 Arbitrator brick)
> Server C: VOL1 - Brick 3 directory
> /brick/brick3  (VOL1 Data brick)
> Server D: VOL2 - Brick 1 directory
> /brick/brick1  (VOL2 Data brick)
> Server E  VOL2 - Brick 2 directory
> /brick/brick2  (VOL2 Data brick)
>
> Questions about this configuration
> 1.  Is it safe to use a  mount point 2 times ?
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
> says "Ensure that no more than one brick is created from a single mount."
> In my example I have VOL1 and VOL2 sharing the mountpoint /brick on Server B
>
> 2.  Could I start a standard replica3 (VOL1) with 3 data bricks and add 2
> additional data bricks plus 1 arbitrator brick (VOL2)  to create a
>  distributed-replicate cluster providing ~6TB of contiguous storage ?  .
>  By contiguous storage I mean that df -h would show ~6 TB disk space.
>
> Thank You For Your Help !!
>
> On Mon, Sep 28, 2020 at 4:04 PM C Williams 
> wrote:
>
>> Strahil,
>>
>> Thank You For Your Help !
>>
>> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
>> wrote:
>>
>>> You can setup your bricks in such way , that each host has at least 1
>>> brick.
>>> For example:
>>> Server A: VOL1 - Brick 1
>>> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
>>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
>>> Server D: VOL2 - brick 1
>>>
>>> The most optimal is to find a small system/VM for being an arbiter and
>>> having a 'replica 3 arbiter 1' volume.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>> On Monday, September 28, 2020 at 20:46:16 GMT+3, Jayme <
>>> jay...@gmail.com> wrote:
>>>
>>>
>>>
>>>
>>>
>>> It might be possible to do something similar as described in the
>>> documentation here:
>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
>>> -- but I'm not sure if oVirt HCI would support it. You might have to roll
>>> out your own GlusterFS storage solution. Someone with more Gluster/HCI
>>> knowledge might know better.
>>>
>>> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
>>> wrote:
>>> > Jayme,
>>> >
>>> > Thank for getting back with me !
>>> >
>>> > If I wanted to be wasteful with storage, could I start with an initial
>>> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
>>> solve split-brains for 4 bricks ?
>>> >
>>> > Thank You For Your Help !
>>> >
>>> > On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>>> >> You can only do HCI in multiple's of 3. You could do a 3 server HCI
>>> setup and add the other two servers as compute nodes or you could add a 6th
>>> server and expand HCI across all 6
>>> >>
>>> >> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>>> wrote:
>>> >>> Hello,
>>> >>>
>>> >>> We recently received 5 servers. All have about 3 TB of storage.
>>> >>>
>>> >>> I want to deploy an oVirt HCI using as much of my storage and
>>> compute resources as possible.
>>> >>>
>>> >>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>> >>>
>>> >>> I have deployed replica 3s and know about replica 2 + arbiter -- but
>>> an arbiter would not be applicable here -- since I have equal storage on
>>> all of the planned bricks.
>>> >>>
>>> >>> Thank You For Your Help !!
>>> >>>
>>> >>> C Williams
>>> >>>
>>> >>
>>> >

[ovirt-users] Re: Replica Question

2020-09-28 Thread C Williams
Hello,

More questions on this, since I have 5 servers. Could the following work?
Each server has one 3 TB RAID 6 partition that I want to use for
contiguous storage.

Mountpoint for RAID 6 partition (3 TB): /brick

Server A: VOL1 - Brick 1 -- directory /brick/brick1 (VOL1 data brick)
Server B: VOL1 - Brick 2 + VOL2 - Brick 3 -- directories /brick/brick2
(VOL1 data brick) and /brick/brick3 (VOL2 arbiter brick)
Server C: VOL1 - Brick 3 -- directory /brick/brick3 (VOL1 data brick)
Server D: VOL2 - Brick 1 -- directory /brick/brick1 (VOL2 data brick)
Server E: VOL2 - Brick 2 -- directory /brick/brick2 (VOL2 data brick)

Questions about this configuration:
1. Is it safe to use a mount point 2 times?
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/formatting_and_mounting_bricks#idm139907083110432
says "Ensure that no more than one brick is created from a single mount."
In my example, VOL1 and VOL2 share the mountpoint /brick on Server B.
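One way to satisfy the one-brick-per-mount guidance on Server B is to carve the RAID 6 device into two logical volumes, each with its own XFS filesystem and mountpoint. A sketch, assuming the RAID 6 virtual disk appears as /dev/sdb; the device name, sizes, and paths are illustrative, and the arbiter brick needs only metadata space:

```shell
pvcreate /dev/sdb
vgcreate vg_gluster /dev/sdb
lvcreate --size 2.9T --name lv_vol1_brick vg_gluster   # VOL1 data brick
lvcreate --size 50G  --name lv_vol2_arb   vg_gluster   # VOL2 arbiter brick (metadata only)
mkfs.xfs -i size=512 /dev/vg_gluster/lv_vol1_brick     # 512 B inodes, as Gluster docs recommend
mkfs.xfs -i size=512 /dev/vg_gluster/lv_vol2_arb
mkdir -p /gluster/vol1_brick /gluster/vol2_arb
mount /dev/vg_gluster/lv_vol1_brick /gluster/vol1_brick
mount /dev/vg_gluster/lv_vol2_arb   /gluster/vol2_arb
```

With this layout, each brick sits on its own mount, so no single mount hosts more than one brick.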

2. Could I start a standard replica 3 (VOL1) with 3 data bricks, then add 2
additional data bricks plus 1 arbiter brick (VOL2) to create a
distributed-replicate cluster providing ~6 TB of contiguous storage?
By contiguous storage I mean that df -h would show ~6 TB of disk space.

Thank You For Your Help !!

On Mon, Sep 28, 2020 at 4:04 PM C Williams  wrote:

> Strahil,
>
> Thank You For Your Help !
>
> On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
> wrote:
>
>> You can setup your bricks in such way , that each host has at least 1
>> brick.
>> For example:
>> Server A: VOL1 - Brick 1
>> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
>> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
>> Server D: VOL2 - brick 1
>>
>> The most optimal is to find a small system/VM for being an arbiter and
>> having a 'replica 3 arbiter 1' volume.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>> On Monday, September 28, 2020 at 20:46:16 GMT+3, Jayme <
>> jay...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> It might be possible to do something similar as described in the
>> documentation here:
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
>> -- but I'm not sure if oVirt HCI would support it. You might have to roll
>> out your own GlusterFS storage solution. Someone with more Gluster/HCI
>> knowledge might know better.
>>
>> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
>> wrote:
>> > Jayme,
>> >
>> > Thank for getting back with me !
>> >
>> > If I wanted to be wasteful with storage, could I start with an initial
>> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
>> solve split-brains for 4 bricks ?
>> >
>> > Thank You For Your Help !
>> >
>> > On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>> >> You can only do HCI in multiple's of 3. You could do a 3 server HCI
>> setup and add the other two servers as compute nodes or you could add a 6th
>> server and expand HCI across all 6
>> >>
>> >> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>> wrote:
>> >>> Hello,
>> >>>
>> >>> We recently received 5 servers. All have about 3 TB of storage.
>> >>>
>> >>> I want to deploy an oVirt HCI using as much of my storage and compute
>> resources as possible.
>> >>>
>> >>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>> >>>
>> >>> I have deployed replica 3s and know about replica 2 + arbiter -- but
>> an arbiter would not be applicable here -- since I have equal storage on
>> all of the planned bricks.
>> >>>
>> >>> Thank You For Your Help !!
>> >>>
>> >>> C Williams
>> >>>
>> >>
>> >
>>
>

[ovirt-users] Re: Replica Question

2020-09-28 Thread C Williams
Strahil,

Thank You For Your Help !

On Mon, Sep 28, 2020 at 4:01 PM Strahil Nikolov 
wrote:

> You can setup your bricks in such way , that each host has at least 1
> brick.
> For example:
> Server A: VOL1 - Brick 1
> Server B: VOL1 - Brick 2 + VOL 3 - Brick 3
> Server C: VOL1 - Brick 3 + VOL 2 - Brick 2
> Server D: VOL2 - brick 1
>
> The most optimal is to find a small system/VM for being an arbiter and
> having a 'replica 3 arbiter 1' volume.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
> On Monday, September 28, 2020 at 20:46:16 GMT+3, Jayme <
> jay...@gmail.com> wrote:
>
>
>
>
>
> It might be possible to do something similar as described in the
> documentation here:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
> -- but I'm not sure if oVirt HCI would support it. You might have to roll
> out your own GlusterFS storage solution. Someone with more Gluster/HCI
> knowledge might know better.
>
> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
> wrote:
> > Jayme,
> >
> > Thank for getting back with me !
> >
> > If I wanted to be wasteful with storage, could I start with an initial
> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
> solve split-brains for 4 bricks ?
> >
> > Thank You For Your Help !
> >
> > On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
> >> You can only do HCI in multiple's of 3. You could do a 3 server HCI
> setup and add the other two servers as compute nodes or you could add a 6th
> server and expand HCI across all 6
> >>
> >> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
> wrote:
> >>> Hello,
> >>>
> >>> We recently received 5 servers. All have about 3 TB of storage.
> >>>
> >>> I want to deploy an oVirt HCI using as much of my storage and compute
> resources as possible.
> >>>
> >>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
> >>>
> >>> I have deployed replica 3s and know about replica 2 + arbiter -- but
> an arbiter would not be applicable here -- since I have equal storage on
> all of the planned bricks.
> >>>
> >>> Thank You For Your Help !!
> >>>
> >>> C Williams
> >>>
> >>
> >
>


[ovirt-users] Re: Replica Question

2020-09-28 Thread Strahil Nikolov via Users
You can set up your bricks in such a way that each host has at least 1 brick.
For example:
Server A: VOL1 - Brick 1
Server B: VOL1 - Brick 2 + VOL2 - Brick 3
Server C: VOL1 - Brick 3 + VOL2 - Brick 2
Server D: VOL2 - Brick 1

The optimal approach is to find a small system/VM to act as an arbiter and
create a 'replica 3 arbiter 1' volume.
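Creating such a volume uses the 'replica 3 arbiter 1' syntax, where the last brick of each triple becomes the arbiter. A sketch, assuming a small extra host named arbiter-host and illustrative brick paths:

```shell
# Two data copies plus one metadata-only arbiter per subvolume
gluster volume create VOL1 replica 3 arbiter 1 \
    serverA:/gluster/brick1/brick \
    serverB:/gluster/brick2/brick \
    arbiter-host:/gluster/arb1/brick
gluster volume start VOL1
```

The arbiter stores only file names and metadata, so it breaks split-brain ties without needing a full brick's worth of disk.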

Best Regards,
Strahil Nikolov





On Monday, September 28, 2020 at 20:46:16 GMT+3, Jayme wrote:





It might be possible to do something similar as described in the documentation 
here: 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
 -- but I'm not sure if oVirt HCI would support it. You might have to roll out 
your own GlusterFS storage solution. Someone with more Gluster/HCI knowledge 
might know better.

On Mon, Sep 28, 2020 at 1:26 PM C Williams  wrote:
> Jayme,
> 
> Thank for getting back with me ! 
> 
> If I wanted to be wasteful with storage, could I start with an initial 
> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter 
> solve split-brains for 4 bricks ?
> 
> Thank You For Your Help !
> 
> On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>> You can only do HCI in multiple's of 3. You could do a 3 server HCI setup 
>> and add the other two servers as compute nodes or you could add a 6th server 
>> and expand HCI across all 6
>> 
>> On Mon, Sep 28, 2020 at 12:28 PM C Williams  wrote:
>>> Hello,
>>> 
>>> We recently received 5 servers. All have about 3 TB of storage. 
>>> 
>>> I want to deploy an oVirt HCI using as much of my storage and compute 
>>> resources as possible. 
>>> 
>>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>> 
>>> I have deployed replica 3s and know about replica 2 + arbiter -- but an 
>>> arbiter would not be applicable here -- since I have equal storage on all 
>>> of the planned bricks.
>>> 
>>> Thank You For Your Help !!
>>> 
>>> C Williams
>>> 
>> 
> 


[ovirt-users] Re: Replica Question

2020-09-28 Thread C Williams
Thanks Jayme !

I'll read up on it.

Thanks Again For Your Help !!

On Mon, Sep 28, 2020 at 1:42 PM Jayme  wrote:

> It might be possible to do something similar as described in the
> documentation here:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
> -- but I'm not sure if oVirt HCI would support it. You might have to roll
> out your own GlusterFS storage solution. Someone with more Gluster/HCI
> knowledge might know better.
>
> On Mon, Sep 28, 2020 at 1:26 PM C Williams 
> wrote:
>
>> Jayme,
>>
>> Thank for getting back with me !
>>
>> If I wanted to be wasteful with storage, could I start with an initial
>> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
>> solve split-brains for 4 bricks ?
>>
>> Thank You For Your Help !
>>
>> On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>>
>>> You can only do HCI in multiple's of 3. You could do a 3 server HCI
>>> setup and add the other two servers as compute nodes or you could add a 6th
>>> server and expand HCI across all 6
>>>
>>> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>>> wrote:
>>>
 Hello,

 We recently received 5 servers. All have about 3 TB of storage.

 I want to deploy an oVirt HCI using as much of my storage and compute
 resources as possible.

 Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?

 I have deployed replica 3s and know about replica 2 + arbiter -- but an
 arbiter would not be applicable here -- since I have equal storage on all
 of the planned bricks.

 Thank You For Your Help !!

 C Williams

>>>


[ovirt-users] Re: Replica Question

2020-09-28 Thread Jayme
It might be possible to do something similar as described in the
documentation here:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/creating_arbitrated_replicated_volumes#sect-Create_Multiple_Arbitrated_Volumes
-- but I'm not sure if oVirt HCI would support it. You might have to roll
out your own GlusterFS storage solution. Someone with more Gluster/HCI
knowledge might know better.

On Mon, Sep 28, 2020 at 1:26 PM C Williams  wrote:

> Jayme,
>
> Thank for getting back with me !
>
> If I wanted to be wasteful with storage, could I start with an initial
> replica 2 + arbiter and then add 2 bricks to the volume ? Could the arbiter
> solve split-brains for 4 bricks ?
>
> Thank You For Your Help !
>
> On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:
>
>> You can only do HCI in multiple's of 3. You could do a 3 server HCI setup
>> and add the other two servers as compute nodes or you could add a 6th
>> server and expand HCI across all 6
>>
>> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
>> wrote:
>>
>>> Hello,
>>>
>>> We recently received 5 servers. All have about 3 TB of storage.
>>>
>>> I want to deploy an oVirt HCI using as much of my storage and compute
>>> resources as possible.
>>>
>>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>>
>>> I have deployed replica 3s and know about replica 2 + arbiter -- but an
>>> arbiter would not be applicable here -- since I have equal storage on all
>>> of the planned bricks.
>>>
>>> Thank You For Your Help !!
>>>
>>> C Williams
>>>
>>


[ovirt-users] Re: Replica Question

2020-09-28 Thread C Williams
Jayme,

Thanks for getting back with me!

If I wanted to be wasteful with storage, could I start with an initial
replica 2 + arbiter and then add 2 bricks to the volume? Could the arbiter
solve split-brains for 4 bricks?

Thank You For Your Help !

On Mon, Sep 28, 2020 at 12:05 PM Jayme  wrote:

> You can only do HCI in multiple's of 3. You could do a 3 server HCI setup
> and add the other two servers as compute nodes or you could add a 6th
> server and expand HCI across all 6
>
> On Mon, Sep 28, 2020 at 12:28 PM C Williams 
> wrote:
>
>> Hello,
>>
>> We recently received 5 servers. All have about 3 TB of storage.
>>
>> I want to deploy an oVirt HCI using as much of my storage and compute
>> resources as possible.
>>
>> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>>
>> I have deployed replica 3s and know about replica 2 + arbiter -- but an
>> arbiter would not be applicable here -- since I have equal storage on all
>> of the planned bricks.
>>
>> Thank You For Your Help !!
>>
>> C Williams
>>
>


[ovirt-users] Re: Replica Question

2020-09-28 Thread Jayme
You can only do HCI in multiples of 3. You could do a 3-server HCI setup
and add the other two servers as compute nodes, or you could add a 6th
server and expand HCI across all 6.

On Mon, Sep 28, 2020 at 12:28 PM C Williams  wrote:

> Hello,
>
> We recently received 5 servers. All have about 3 TB of storage.
>
> I want to deploy an oVirt HCI using as much of my storage and compute
> resources as possible.
>
> Would oVirt support a "replica 5" HCI (Compute/Gluster) cluster ?
>
> I have deployed replica 3s and know about replica 2 + arbiter -- but an
> arbiter would not be applicable here -- since I have equal storage on all
> of the planned bricks.
>
> Thank You For Your Help !!
>
> C Williams
>