> "possible unrecoverable gluster bugs" is a sweeping statement. Do you have
> any particular issue that you can refer us to?

No, I haven't experienced any issue, but if a new one appears under heavy
load, in this environment I could leave 1000 VDIs out of service (that is,
1000 people without their workplace).

Once all these questions are clarified: with oVirt, is it possible to
achieve this architecture (or something similar)? Do you have any customer
who has run a Gluster environment under heavy VDI load?

Thanks a lot.

2016-11-23 9:01 GMT+01:00 Sahina Bose <sab...@redhat.com>:
>
>
> On Wed, Nov 23, 2016 at 1:18 PM, Oscar Segarra <oscar.sega...@gmail.com>
> wrote:
>
>> Hi,
>>>
>>> As oVirt makes it possible to attach local storage, I suppose it can be
>>> used to run virtual machines.
>>>
>>> I have drawn a couple of diagrams in order to find out whether it is
>>> possible to set up this configuration:
>>>
>>> 1.- The on-going scenario:
>>> Every host runs 100 VDI virtual machines whose disks are placed on
>>> local storage. There is a common gluster volume shared between all
>>> nodes.
>>>
>>> [image: Inline image 1]
>>
>> With local storage you end up losing many of the benefits of shared
>> storage - including migration and HA.
>> If you do have SSDs on your physical hosts, have you considered building
>> the gluster volume on top of these? That could give you improved
>> performance.
>> Regarding performance, I think it is best that you run a test comparing
>> gluster storage performance with local storage and see if it is
>> acceptable to you. Please share the results in case you do.
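A quick way to run that comparison is to point the same fio job at a
directory on the local disk and at a FUSE mount of the gluster volume. A
minimal sketch (mount points, directories and the volume name below are
placeholders):

    # Random-write test against the local storage:
    fio --name=localtest --directory=/var/local-vdi \
        --rw=randwrite --bs=4k --size=1G --numjobs=4 \
        --iodepth=32 --direct=1 --ioengine=libaio --group_reporting

    # Same job against a FUSE mount of the gluster volume:
    mkdir -p /mnt/glustertest
    mount -t glusterfs host1:/vdivol /mnt/glustertest
    fio --name=glustertest --directory=/mnt/glustertest \
        --rw=randwrite --bs=4k --size=1G --numjobs=4 \
        --iodepth=32 --direct=1 --ioengine=libaio --group_reporting

Comparing IOPS and latency between the two runs gives a first-order answer
to whether the gluster overhead is acceptable for this VDI workload.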
>> Yes, but I want to avoid possible corruption problems due to possible
>> unrecoverable gluster bugs.
>> We have to do some development, and I don't want to spend money on this
>> process and then discover that the performance is not good enough and
>> have to do a
>
> "possible unrecoverable gluster bugs" is a sweeping statement. Do you have
> any particular issue that you can refer us to?
>
>> In the above diagram each host is in its own cluster - as all hosts in a
>> cluster should have access to the storage domain?
>>
>> Yes, every host has to have access to two storage domains: the local one
>> and the shared gluster one.
>>
>> Is the gluster volume for backup served from a separate set of servers?
>>
>> No, each host will have 2 disks: /dev/sdb1 (for running VMs on local
>> storage) and /dev/sdc1 (for the shared gluster volume where backups are
>> stored).
>>>
>>> 2.- If one node fails:
>>>
>>> [image: Inline image 2]
>>>
>>> oVirt has to be able to inventory the copies of the machines (in our
>>> example vdi201 ... vdi300) and start them on the remaining nodes.
>>>
>>> Is it possible to reach this configuration with oVirt? Or something
>>> similar?
>>
>> This is the use case for gluster volume shared storage - where the
>> volume is a replica 3. If any host goes down, the data is available on
>> the remaining 2 nodes, and the VMs can be migrated to other nodes.
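For reference, a replica 3 volume across three hosts is created roughly
like this (host names, brick paths and the volume name are placeholders;
"group virt" applies the option group usually recommended for VM image
stores):

    # One SSD-backed brick per host, same path on each:
    gluster volume create vdistore replica 3 \
        host1:/gluster/brick1/vdistore \
        host2:/gluster/brick1/vdistore \
        host3:/gluster/brick1/vdistore

    # Apply the virt option group and start the volume:
    gluster volume set vdistore group virt
    gluster volume start vdistore

With all three bricks up, any single host can fail without taking the
storage domain offline.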
>> Yes, I know, but I'm still worried about corruption issues due to
>> possible gluster bugs, or performance problems under heavy load.
>>
>> I don't think what you ask for is possible automatically. If you want
>> local storage to gluster volume backup, you would need a 1-1 mapping,
>> i.e. each local storage domain has its own gluster volume backup. You
>> could then import the storage domain that's backed up on the gluster
>> volume and start the VMs on the remaining hosts.
>>
>> I don't want local storage for backup; I prefer gluster shared storage
>> for backup.
>>>
>>> Making backups with the import-export procedure based on snapshots can
>>> take a lot of time and resources. Incremental rsync is cheaper in
>>> terms of resources.
>>
>> Geo-replication based backup internally uses rsync; it also makes sure
>> that VM images are consistent on disk before being synced. It however
>> works as a backup option between two gluster volumes.
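As a rough sketch, a geo-replication session between a master volume and a
slave volume is set up like this (volume and host names are placeholders;
the slave volume must already exist and be started, and passwordless root
SSH from a master node to the slave host is assumed):

    # Generate and distribute the geo-replication ssh keys, then
    # create and start the session:
    gluster system:: execute gsec_create
    gluster volume geo-replication vdistore backuphost::backupvol \
        create push-pem
    gluster volume geo-replication vdistore backuphost::backupvol start

    # Verify that the session is syncing:
    gluster volume geo-replication vdistore backuphost::backupvol status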
>> Do you know if it is possible to have multiple masters geo-replicating
>> against a single slave?
>
> No, it is not possible. A master can have multiple slaves, not the other
> way around.
>
>> Thanks a lot.