Hi Jakub,

I think the solution you are looking for is something GlusterFS can provide,
for example:
http://www.gluster.org/about/
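For instance, a replicated Gluster volume mirrors every file across two hosts,
much like RAID 1 over the network. A minimal sketch, assuming two hypothetical
hosts node1/node2 that each export a brick directory at /export/brick1:

  # Create a 2-way replicated volume; every file is written to both
  # bricks, so either host can fail without losing data.
  gluster volume create datavol replica 2 \
      node1:/export/brick1 node2:/export/brick1
  gluster volume start datavol

  # Clients mount it like any filesystem (hypothetical mount point):
  mount -t glusterfs node1:/datavol /mnt/data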

Best, Sven.




-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of 
Jonathan Horne
Sent: Wednesday, May 1, 2013 17:16
To: Jakub Bittner; [email protected]
Subject: Re: [Users] Fault tolerant storage

I would be surprised if you could do that without your users noticing.

It seems you would be better served by putting each storage domain on a RAID 
array and relying on the RAID hardware's own tools to handle disk-failure 
tolerance.
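
If a hardware controller is not an option, Linux software RAID gives the same
disk-failure tolerance. A minimal sketch with mdadm, using two hypothetical
spare disks /dev/sdb and /dev/sdc:

  # Mirror the two disks as a RAID-1 array; either disk can fail
  # without data loss.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

  # Check the array state and resync progress.
  cat /proc/mdstat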

Cheers,
jonathan

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of 
Jakub Bittner
Sent: Tuesday, April 30, 2013 9:19 AM
To: [email protected]
Subject: [Users] Fault tolerant storage

Hi,

would it be possible in the near future to use, for example, two attached DATA 
storage domains (iSCSI or similar) as fault-tolerant storage?

I mean: I would have two data storage domains connected to the data center, 
both holding exactly the same data. My VMs would run from one, all changes 
would be mirrored to the second (like RAID 1), and if one storage failed, 
everything would switch over to the second storage without users noticing.
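
The kind of mirroring I mean is roughly what DRBD does between two storage 
nodes today. A minimal sketch of a hypothetical DRBD resource, with made-up 
host names, addresses, and backing disks, that mirrors one block device 
between two nodes:

  resource r0 {
      device    /dev/drbd0;   # mirrored device exposed to the host
      disk      /dev/sdb1;    # hypothetical backing disk on each node
      meta-disk internal;
      on storage1 {
          address 10.0.0.1:7789;
      }
      on storage2 {
          address 10.0.0.2:7789;
      }
  }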
_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users
