A bit late, but:

- For any IO-heavy (or even medium) workload, try to avoid Ceph, no
offence; it simply takes a lot of $$$ to make Ceph perform well in random
IO worlds (note that RHEL and the vendors provide only reference
architectures with SEQUENTIAL benchmark workloads, not random - see the
quick fio sketch below) - not to mention the long list of bugs we hit back
in the day (a single great guy handled the Ceph integration for CloudStack,
but otherwise there was not a lot of help from other committers, if I'm not
mistaken, afaik...)
- NFS gives better performance, but it's not magic... (it is, however, the
best supported option, code-wise and bug-wise :)
- And for top-notch performance (costs some $$$), SolidFire is the way to
go (we have tons of IO-heavy customers, so this is THE solution really,
after living with Ceph, then NFS on SSDs, etc.) and it provides guaranteed
IOPS, etc...
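Quick aside: if you want to sanity-check any of these backends yourself, a
random 4k fio run along the lines of the rough sketch below tells you far
more than the sequential numbers in the vendor reference architectures.
The target path, file size and runtime here are just placeholders - point
it at wherever your primary storage is mounted:

    # rough sketch only - adjust the path/size/runtime to your own setup
    fio --name=randrw-test --filename=/mnt/primary/fio-testfile --size=10G \
        --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 \
        --iodepth=32 --numjobs=4 --runtime=120 --time_based --group_reporting

Compare the resulting IOPS/latency against the same run with --rw=read or
--rw=write and you'll see why sequential-only benchmarks are misleading.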

Cheers.

On 7 January 2018 at 22:46, Grégoire Lamodière <g.lamodi...@dimsi.fr> wrote:

> Hi Vahric,
>
> Thank you. I will have a look on it.
>
> Grégoire
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
> -------- Original Message --------
> From: Vahric MUHTARYAN <vah...@doruk.net.tr>
> Date: 07/01/2018 21:08 (GMT+01:00)
> To: users@cloudstack.apache.org
> Subject: Re: KVM storage cluster
>
> Hello Grégoire,
>
> I suggest you look at EMC ScaleIO for block-based operations. It has a
> free edition too! And as block storage it performs better than Ceph ;)
>
> Regards
> VM
>
> On 7.01.2018 18:12, "Grégoire Lamodière" <g.lamodi...@dimsi.fr> wrote:
>
>     Hi Ivan,
>
>     Thank you for your quick reply.
>
>     I'll have a look on Ceph and related perfs.
>     As you mentioned, 2 DRBD NFS servers can do the job, but if I can
> avoid using 2 blades just for passing blocks to NFS (and having to
> maintain them as well), even better.
>
>     Thanks for pointing to ceph.
>
>     Grégoire
>
>
>
>
>     ---
>     Grégoire Lamodière
>     T/ + 33 6 76 27 03 31
>     F/ + 33 1 75 43 89 71
>
>     -----Original Message-----
>     From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
>     Sent: Sunday, 7 January 2018 15:20
>     To: users@cloudstack.apache.org
>     Subject: Re: KVM storage cluster
>
>     Hi, Grégoire,
>     You could have:
>     - local storage if you like, so every compute node has its own
> space (one LUN per host)
>     - Ceph deployed on the same compute nodes (distribute raw
> devices among the nodes)
>     - a dedicated node acting as NFS server (or two servers with DRBD)
>
>     I don't think a shared FS is a good option; even clustered LVM is a
> big pain.
>
>     2018-01-07 21:08 GMT+07:00 Grégoire Lamodière <g.lamodi...@dimsi.fr>:
>
>     > Dear all,
>     >
>     > Since Citrix deeply changed the free version of XenServer 7.3, I am
>     > in the process of PoCing a move of our Xen clusters to KVM on
>     > CentOS 7. I decided to use HP blades connected to an HP P2000 over
>     > multipath SAS links.
>     >
>     > The network part seems fine to me, not so far from what we used to do
>     > with Xen.
>     > About the storage, I am a little bit confused about the shared
>     > mountpoint storage option offered by CS.
>     >
>     > What would be the good option, in terms of CS, to create a clustered
>     > FS using my SAS array?
>     > I read somewhere (a Dag SlideShare, I think) that GFS2 is the only
>     > clustered FS supported by CS. Is that still correct?
>     > Does it mean I have to create the GFS2 cluster, make identical mount
>     > configuration on all hosts, and use it in CS as NFS?
>     > And I do not have to add the storage to KVM prior to CS zone creation?
>     >
>     > Thanks a lot for any help / information.
>     >
>     > ---
>     > Grégoire Lamodière
>     > T/ + 33 6 76 27 03 31
>     > F/ + 33 1 75 43 89 71
>     >
>     >
>
>
>     --
>     With best regards, Ivan Kudryavtsev
>     Bitworks Software, Ltd.
>     Cell: +7-923-414-1515
>     WWW: http://bitworks.software/ <http://bw-sw.com/>
>
>
>
>


-- 

Andrija Panić
