Hi Ivan,

Thank you for your quick reply.

I'll have a look at Ceph and its performance characteristics. As you mentioned, two DRBD NFS servers can do the job, but if I can avoid dedicating (and maintaining) two blades just to pass blocks to NFS, that is even better. Thanks for pointing me to Ceph.

Grégoire

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

-----Original Message-----
From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
Sent: Sunday, January 7, 2018 15:20
To: users@cloudstack.apache.org
Subject: Re: KVM storage cluster

Hi, Grégoire,

You could:
- use local storage if you like, so every compute node has its own space (one LUN per host);
- deploy Ceph on the same compute nodes (distributing the raw devices among them);
- dedicate a node as an NFS server (or two servers with DRBD).

I don't think a shared FS is a good option; even clustered LVM is a big pain.

2018-01-07 21:08 GMT+07:00 Grégoire Lamodière <g.lamodi...@dimsi.fr>:

> Dear all,
>
> Since Citrix deeply changed the free version of XenServer 7.3, I am in
> the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> decided to use HP blades connected to an HP P2000 over multipath SAS links.
>
> The network part seems fine to me, not far from what we used to do
> with Xen. Regarding storage, I am a little confused about the shared
> mountpoint storage option offered by CS.
>
> What would be the right option, in CS terms, to create a clustered FS
> on my SAS array?
> I read somewhere (a Dag SlideShare, I think) that GFS2 is the only
> clustered FS supported by CS. Is that still correct?
> Does it mean I have to create the GFS2 cluster, set up identical mount
> configuration on all hosts, and use it in CS as NFS?
> Do I have to add the storage to KVM prior to CS zone creation?
>
> Thanks a lot for any help / information.
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71

--
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/
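[Editor's note for later readers of this thread: the GFS2 / SharedMountPoint route Grégoire asks about can be sketched roughly as below. This assumes a working corosync/dlm cluster already exists on the KVM hosts; the cluster name, filesystem name, device path, and mount point are placeholders, not a tested recipe. Check the mkfs.gfs2 man page and the CloudStack primary-storage documentation before relying on any of it.]

```shell
# Once, on one host: create the clustered filesystem on the shared SAS LUN.
# -t is <clustername>:<fsname>; -j is one journal per host that will mount it.
# /dev/mapper/p2000_lun1 is a placeholder for the multipath device.
mkfs.gfs2 -p lock_dlm -t mycluster:cs-primary -j 4 /dev/mapper/p2000_lun1

# On EVERY KVM host in the cluster: mount it at the SAME path.
mkdir -p /mnt/cs-primary
echo '/dev/mapper/p2000_lun1 /mnt/cs-primary gfs2 noatime 0 0' >> /etc/fstab
mount /mnt/cs-primary

# Then add /mnt/cs-primary in CloudStack as primary storage with protocol
# "SharedMountPoint" (via the UI or the createStoragePool API) -- i.e. it is
# registered as SharedMountPoint, not as NFS, since CS only expects the path
# to already be mounted identically on all hosts.
```

The key point the SharedMountPoint type captures is that CloudStack does not mount anything itself; it trusts that the same path holds the same clustered filesystem on every host in the cluster.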