I suggest having a look at this thread, which argues that sizes 'in
between' the requirements of the different RocksDB levels give no net
benefit, and sizing accordingly.

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030740.html
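
For reference, the level arithmetic behind that advice can be sketched
in a few lines of Python. This assumes BlueStore's default RocksDB
options (max_bytes_for_level_base = 256 MiB,
max_bytes_for_level_multiplier = 10) and counts L0 as roughly one extra
L1's worth, so treat it as a rough guide rather than gospel - check
bluestore_rocksdb_options on your own cluster:

GiB = 1024 ** 3
level_base = 256 * 1024 ** 2    # default max_bytes_for_level_base (256 MiB)
multiplier = 10                 # default max_bytes_for_level_multiplier

total = level_base              # count L0 as roughly one extra L1's worth
for level in range(1, 6):
    total += level_base * multiplier ** (level - 1)
    print("L0..L%d cumulative DB target: ~%.1f GiB" % (level, total / GiB))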

My impression is that ~28 GB is good (enough for L0+L1+L2+L3), ~280 GB
is good (adding L4), or whatever size is needed to also hold L5 is
good, but anything in between will probably not get used.  I've seen
this somewhat borne out on our oldest storage nodes, which only have
enough NVMe space to provide 24 GiB per OSD: although only ~3 GiB of
the 24 GiB of DB space is in use, ~1 GiB has already spilled over to
the 'slow' device:

"db_total_bytes": 26671570944,
"db_used_bytes": 2801795072,
"slow_used_bytes": 1102053376
(Mimic 13.2.5)
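
As a rough aid, something like the following could be used to spot
spillover across the OSDs on a node; it just shells out to
'ceph daemon osd.<id> perf dump' and reads the same bluefs counters
quoted above (the OSD ids below are placeholders - adapt for your host):

import json
import subprocess

def bluefs_stats(osd_id):
    # same counters as quoted above, from the 'bluefs' section of perf dump
    out = subprocess.check_output(["ceph", "daemon", "osd.%d" % osd_id,
                                   "perf", "dump"])
    return json.loads(out)["bluefs"]

for osd_id in (0, 1, 2):   # placeholder OSD ids
    s = bluefs_stats(osd_id)
    gib = 2 ** 30
    note = "  <-- spilling to slow device" if s["slow_used_bytes"] else ""
    print("osd.%d: db %.1f/%.1f GiB, slow %.1f GiB%s" % (
        osd_id, s["db_used_bytes"] / gib, s["db_total_bytes"] / gib,
        s["slow_used_bytes"] / gib, note))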

thanks,
Ben


On Tue, May 28, 2019 at 12:55 PM Igor Fedotov <ifedo...@suse.de> wrote:
>
> Hi Jake,
>
> just my 2 cents - I'd suggest using LVM for DB/WAL, so you can
> seamlessly extend their sizes if needed.
>
> Once you've configured things this way, and if you're able to add
> more NVMe later, you're almost free to select any size at the
> initial stage.
>
>
> Thanks,
>
> Igor
>
>
> On 5/28/2019 4:13 PM, Jake Grimmett wrote:
> > Dear All,
> >
> > Quick question regarding SSD sizing for a DB/WAL...
> >
> > I understand 4% is generally recommended for a DB/WAL.
> >
> > Does this 4% recommendation still apply to "large" 12TB drives, or
> > can we economise and use a smaller DB/WAL?
> >
> > Ideally I'd fit a smaller drive providing a 266GB DB/WAL per 12TB
> > OSD, rather than 480GB, i.e. 2.2% rather than 4%.
> >
> > Will "bad things" happen as the OSD fills with a smaller DB/WAL?
> >
> > By the way, the cluster will mainly be providing CephFS with fairly
> > large files, and will use erasure coding.
> >
> > many thanks for any advice,
> >
> > Jake
> >
> >