On Wed, Aug 22, 2018 at 1:28 PM Kevin Olbrich <[email protected]> wrote:
>
> Hi!
>
> I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to CephFS.
> This storage is used for backup images (large sequential reads and writes).
>
> To save space and get a RAIDZ2 (RAID6)-like setup, I am planning the
> following profile:
>
> ceph osd erasure-code-profile set myprofile \
>    k=3 \
>    m=2 \
>    crush-failure-domain=rack
>
> Performance is not the first priority, which is why I do not plan to offload
> the WAL/DB to separate devices (a broken NVMe taking out several OSDs is more
> administrative overhead than losing single OSDs).
> Disks are attached via SAS multipath; throughput in general is not a problem,
> but I have not tested with Ceph yet.
>
> Is anyone using CephFS + BlueStore + EC 3/2 without a separate WAL/DB device,
> and is it working well?

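On the WAL/DB question: if you give ceph-volume only a data device, BlueStore
simply colocates the DB and WAL on it, so "without WAL/DB dev" needs no extra
configuration. Untested sketch, the device name is just an example:

    # no --block.db / --block.wal, so DB and WAL land on the data device
    ceph-volume lvm create --bluestore --data /dev/sdb
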
I have a very small home cluster that's 6 OSDs across 3 nodes, using EC
on BlueStore on spinning disks.  I don't have benchmarks, but it was
usable for a few TB of backups.
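
For the EC pool wiring, roughly this (untested sketch, assuming Luminous or
later, a filesystem named "cephfs", and example pool/path names):

    # profile as in your mail (crush-failure-domain on Luminous+)
    ceph osd erasure-code-profile set myprofile k=3 m=2 crush-failure-domain=rack
    ceph osd pool create cephfs_data_ec 128 erasure myprofile
    # CephFS needs overwrites enabled on EC pools (BlueStore only)
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true
    # add it as an additional data pool and point the backup dir at it
    ceph fs add_data_pool cephfs cephfs_data_ec
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/backups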

John

>
> Thank you.
>
> Kevin