I would suggest dedicating some flash media as their own OSDs and putting
the CephFS metadata pool on them.  Moving the metadata pool onto flash was
a pretty significant boost for me.  My home setup is only 3 nodes, running
EC 2+1 on pure HDD OSDs with metadata on SSDs, and it has been running
stable and fine for a couple of years now.  I wouldn't suggest running EC
2+1 for any data you can't afford to lose, but I can re-create anything in
there given some time.
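
For reference, on Luminous or later the rough shape of that setup looks
something like the sketch below.  Pool and rule names are only examples,
I'm assuming the metadata pool is called cephfs_metadata, and
crush-failure-domain=host reflects my small 3-node setup (you'd keep rack
as in your profile).  Also note that ruleset-failure-domain is the
pre-Luminous spelling; on Luminous and later the option is
crush-failure-domain.

# EC 2+1 data pool; CephFS on EC needs BlueStore and overwrites enabled
ceph osd erasure-code-profile set ec-2-1 \
   k=2 \
   m=1 \
   crush-failure-domain=host
ceph osd pool create cephfs_data 128 128 erasure ec-2-1
ceph osd pool set cephfs_data allow_ec_overwrites true

# The metadata pool must stay replicated (EC is not supported there);
# pin it to the SSD device class with a dedicated CRUSH rule.
ceph osd crush rule create-replicated metadata-ssd default host ssd
ceph osd pool set cephfs_metadata crush_rule metadata-ssd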

On Wed, Aug 22, 2018 at 8:43 AM Paul Emmerich <[email protected]>
wrote:

> Not 3+2, but we run 4+2, 6+2, 6+3, 5+3, and 8+3 with CephFS in
> production. Most of them are HDDs without separate DB devices.
>
>
>
> Paul
>
> 2018-08-22 14:27 GMT+02:00 Kevin Olbrich <[email protected]>:
> > Hi!
> >
> > I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to
> > CephFS.
> > This storage is used for backup images (large sequential reads and
> > writes).
> >
> > To save space and have a RAIDZ2 (RAID6) like setup, I am planning the
> > following profile:
> >
> > ceph osd erasure-code-profile set myprofile \
> >    k=3 \
> >    m=2 \
> >    ruleset-failure-domain=rack
> >
> > Performance is not the first priority, which is why I do not plan to
> > put the WAL/DB on separate devices (a broken NVMe taking out several
> > OSDs is more administrative overhead than single OSDs).
> > The disks are attached via SAS multipath; throughput in general is no
> > problem, but I have not tested with Ceph yet.
> >
> > Is anyone using CephFS + BlueStore + EC 3+2 without a WAL/DB device,
> > and is it working well?
> >
> > Thank you.
> >
> > Kevin
> >
>
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
