Red Hat Gluster Storage is discontinued, but upstream Gluster is quite 
active, and as Sandro Bonazzola (Red Hat) confirmed, there are no plans to 
remove support for Gluster. I think it's still a good choice, especially if 
you don't have a SAN or highly-available NFS.
Also, storage migration is transparent to the VMs, so you can add a SAN at a 
later stage and move all VMs from Gluster to the SAN without disruption*.
Keep in mind that Gluster is tier-2 storage; if you really need a lot of 
IOPS, Ceph might be more suitable.

Best Regards,
Strahil Nikolov
*: Note that this is valid only when the FUSE client is used. Other oVirt 
users report a huge performance increase with the libgfapi interface, but it 
has drawbacks: storage migration is only possible after you switch off 
libgfapi, power off the VM (on a scheduled basis), power it back on, 
live-migrate it to the other storage type, and then re-enable libgfapi for 
the rest of the VMs.
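For context, libgfapi is toggled engine-wide via engine-config. A minimal 
sketch of the disable/migrate/re-enable cycle described above, assuming 
cluster compatibility level 4.3 (match your own cluster level):

    # On the engine host: disable libgfapi so restarted VMs use the FUSE path
    engine-config -s LibgfApiSupported=false --cver=4.3
    systemctl restart ovirt-engine
    # Power-cycle the VM on a scheduled basis, live-migrate its disks to the
    # target storage domain, then re-enable libgfapi for the remaining VMs:
    engine-config -s LibgfApiSupported=true --cver=4.3
    systemctl restart ovirt-engine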

Thanks to Strahil Nikolov for the valuable input! I was off for a few weeks, 
so apologies if I'm reviving a zombie thread.
 
I am a bit confused about where to go with this environment after the 
discontinuation of the hyperconverged setup. What alternative options are 
there for us? Or do you think going the Gluster route would still be 
advisable, even though it appears to be getting phased out over time?
 
Thanks for any input on this!
 
Best regards,
Jonas

  On 1/22/22 14:31, Strahil Nikolov via Users wrote: 
  
 Using the wizard utilizes the Gluster Ansible roles. I would highly 
recommend using it, unless you know what you are doing (for example, storage 
alignment when using hardware RAID).
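For illustration, a minimal sketch of that manual alignment, assuming a 
hypothetical hardware RAID 6 of 10 data disks with a 128 KiB stripe unit 
(device name and sizes are made up; adjust to your controller's real 
geometry):

    # Full stripe = 10 data disks x 128 KiB stripe unit = 1280 KiB
    pvcreate --dataalignment 1280K /dev/sdb
    vgcreate gluster_vg /dev/sdb
    lvcreate -L 1T -n brick1 gluster_vg
    # 512-byte inodes leave room for Gluster's xattrs; su/sw match the RAID
    mkfs.xfs -i size=512 -d su=128k,sw=10 /dev/gluster_vg/brick1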
 Keep in mind that the DHT xlator (the logic in distributed volumes) is 
shard-aware, so your shards are spread between the subvolumes and additional 
performance can be gained. So distributed-replicated volumes have their 
benefits; a sketch of such a layout follows below.
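As a hedged sketch, the following creates a distributed-replicated (2 x 3) 
volume; hostnames and brick paths are invented, and the 'virt' group applies 
the virtualization tuning profile (which, among other things, enables 
sharding):

    # Two replica-3 subvolumes -> one distributed-replicated volume
    gluster volume create vmstore replica 3 \
        host1:/gluster_bricks/brick1/vmstore \
        host2:/gluster_bricks/brick1/vmstore \
        host3:/gluster_bricks/brick1/vmstore \
        host1:/gluster_bricks/brick2/vmstore \
        host2:/gluster_bricks/brick2/vmstore \
        host3:/gluster_bricks/brick2/vmstore
    # Virtualization tuning profile (enables features.shard, etc.)
    gluster volume set vmstore group virt
    gluster volume start vmstore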
 If you decide to avoid software RAID, use only replica 3 volumes, as with 
SSDs/NVMes the failures are usually not physical but logical (maximum writes 
reached -> predictive failure -> total failure).
 Also, consider mounting your Gluster bricks with noatime/relatime and 
context="system_u:object_r:glusterd_brick_t:s0".
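For illustration, a brick entry in /etc/fstab with those options might look 
like this (device and mount point are assumptions; the SELinux context is 
the one quoted above):

    /dev/gluster_vg/brick1  /gluster_bricks/brick1  xfs  inode64,noatime,context="system_u:object_r:glusterd_brick_t:s0"  0 0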
 Best Regards,
 Strahil Nikolov
 
 
 On Fri, Jan 21, 2022 at 11:00, Gilboa Davara <[email protected]> wrote:
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/RJ2NGOZ5JOREETFMY72M6JMYOVXHTFDS/
