More questions on this -- since I have 5 servers, could the following work?
Each server has one 3TB RAID 6 partition that I want to use for contiguous
storage.

Mountpoint for the RAID 6 partition (3TB): /brick

Server A: VOL1 - Brick 1                    directory: /brick/brick1 (VOL1 data brick)
Server B: VOL1 - Brick 2 + VOL2 - Brick 3   directories: /brick/brick2 (VOL1 data brick), /brick/brick3 (VOL2 arbiter brick)
Server C: VOL1 - Brick 3                    directory: /brick/brick3 (VOL1 data brick)
Server D: VOL2 - Brick 1                    directory: /brick/brick1 (VOL2 data brick)
Server E: VOL2 - Brick 2                    directory: /brick/brick2 (VOL2 data brick)

Questions about this configuration:
1. Is it safe to use a mount point 2 times? The documentation says "Ensure
that no more than one brick is created from a single mount." In my example
VOL1 and VOL2 share the mountpoint /brick on Server B.

As long as you keep the arbiter brick (VOL2) in a separate directory from the
data brick (VOL1), it will be fine. A brick is a unique combination of a
Gluster TSP node + a path on that node. You can use /brick/brick1 on all your
nodes and the volume will be fine, but using "/brick/brick1" for the data
brick (VOL1) and "/brick/brick1" for the arbiter brick (VOL2) on the same
host IS NOT ACCEPTABLE. So just keep the brick names unique per node and
everything will be fine. You can set them to anything you want, as long as
you don't use the same brick in 2 volumes.
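A minimal sketch of what such a naming scheme could look like, assuming the
hostnames serverA-serverE and the /brick mountpoint from the layout above
(the directory names vol1_data/vol2_data/vol2_arbiter are illustrative
choices, not from the original message):

```shell
# VOL1: plain replica 3 with data bricks on servers A, B and C.
# Each brick lives in its own directory under the /brick mount point.
gluster volume create VOL1 replica 3 \
    serverA:/brick/vol1_data \
    serverB:/brick/vol1_data \
    serverC:/brick/vol1_data

# VOL2: replica 3 arbiter 1 - data bricks on D and E, arbiter on B.
# The arbiter directory on serverB is distinct from its VOL1 data directory,
# so the two volumes never share a brick path on the same host.
gluster volume create VOL2 replica 3 arbiter 1 \
    serverD:/brick/vol2_data \
    serverE:/brick/vol2_data \
    serverB:/brick/vol2_arbiter
```

These commands need a running Gluster trusted storage pool, so they are a
config sketch rather than something runnable standalone.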

2. Could I start a standard replica 3 volume (VOL1) with 3 data bricks and add
2 additional data bricks plus 1 arbiter brick (VOL2) to create a
distributed-replicate cluster providing ~6TB of contiguous storage?
By contiguous storage I mean that df -h would show ~6TB of disk space.

No, you either use 6 data bricks (subvol 1 -> 3 data disks, subvol 2 -> 3 data
disks), or you use 4 data + 2 arbiter bricks (subvol 1 -> 2 data + 1 arbiter,
subvol 2 -> 2 data + 1 arbiter). The good thing is that you can reshape the
volume once you have more disks.
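A hedged sketch of such a reshape, assuming a replica 3 volume named VOL1 and
hypothetical hostnames (none of these names come from the original message):
adding a second replica set of 3 bricks turns the volume into a 2x3
distributed-replicate volume, doubling the usable space shown by df -h.

```shell
# Add a second replica-3 subvolume (3 more bricks) to the existing
# replica-3 volume VOL1; Gluster then distributes files across the
# two subvolumes.
gluster volume add-brick VOL1 \
    serverD:/brick/vol1_data \
    serverE:/brick/vol1_data \
    serverB:/brick/vol1_data2

# Spread the existing data across both subvolumes.
gluster volume rebalance VOL1 start
gluster volume rebalance VOL1 status
```

Like the create commands, this is a sketch that assumes a live cluster; the
brick count added must match the volume's replica count.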

If you have only Linux VMs, you can follow point 1 and create 2 volumes, which
will be 2 storage domains in oVirt. Then you can stripe (software RAID 0 via
mdadm, or LVM native striping) inside your VMs, with 1 disk from the first
volume and 1 disk from the second volume.
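A minimal sketch of the in-guest striping, assuming the two Gluster-backed
virtual disks appear as /dev/vdb and /dev/vdc inside the VM (the device
names, filesystem choice, and VG/LV names are all assumptions):

```shell
# Option 1: software RAID 0 across the two virtual disks via mdadm.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
mkfs.xfs /dev/md0

# Option 2: LVM native striping.
# -i 2 stripes across 2 physical volumes, -I 64 sets a 64 KiB stripe size.
pvcreate /dev/vdb /dev/vdc
vgcreate vg_data /dev/vdb /dev/vdc
lvcreate -i 2 -I 64 -l 100%FREE -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data
```

Both options need root and real block devices, so treat this as a config
sketch; either way, reads and writes are spread across both Gluster volumes.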

Actually, I'm using 4 Gluster volumes for my NVMe drives, as my network is too
slow. My VMs have 4 disks in a RAID 0 (for boot) and a striped LV (for "/").

Best Regards,
Strahil Nikolov
Users mailing list