On 02.10.2018 21:21, jes...@krogh.cc wrote:
On 02.10.2018 19:28, jes...@krogh.cc wrote:
In the cephfs world there is no central server that holds the cache. Each
cephfs client reads data directly from the OSDs.
I can accept this argument, but nevertheless ... if I used Filestore, it
would work.

bluestore is fairly new though, so if your use case fits filestore better, there is no huge reason not to just use that.


This also means no single point of failure; you can scale out performance
by spreading metadata tree information over multiple MDS servers, and scale
out storage and throughput with added OSD nodes.
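(as a side note, once you have standby MDS daemons running, going to multiple active MDS is mostly a one-liner - "cephfs" below is just an example filesystem name:)

    # on luminous you may first need: ceph fs set cephfs allow_multimds true
    ceph fs set cephfs max_mds 2
    ceph fs status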

so if the cephfs client cache is not sufficient, you can look at the
bluestore cache.
http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/#cache-size

I have been there, but it seems to "not work" - I think the need to slice
the cache per OSD and statically allocate memory per OSD breaks the
efficiency (but I cannot prove it).
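for reference, the cache is indeed carved up statically per osd; on mimic the relevant options look something like this (the values are only an example, tune them to your ram budget):

    # ceph.conf - sizes are in bytes and are allocated per OSD
    [osd]
    # cache per hdd-backed bluestore OSD (default is 1 GiB)
    bluestore_cache_size_hdd = 4294967296
    # how the cache is split between metadata and the rocksdb kv cache
    bluestore_cache_meta_ratio = 0.4
    bluestore_cache_kv_ratio = 0.4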

or you can look at adding an ssd layer over the spinning disks, with eg
bcache. I assume you are using an ssd/nvram for the bluestore db already
My current bluestore(s) are backed by 10TB 7.2K RPM drives, although behind
BBWC. Can you elaborate on the "assumption"? As we're not doing that, I'd
like to explore it.

https://ceph.com/community/new-luminous-bluestore/
read about "multiple devices"
you can split out the DB part of the bluestore to a faster drive (ssd); many tend to put the DBs for 4 spinners on a single ssd. the DB is the osd metadata - it says where on the block device the objects are - and moving it to ssd increases the performance of bluestore significantly.
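when you (re)create an osd with ceph-volume it looks roughly like this - the device paths below are only placeholders for your own:

    # data on the spinner, rocksdb (and wal) on a partition or LV of the ssd
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

as far as I know you have to redeploy the osd to move its db, so it gets backfilled afterwards.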



you should also look at tuning the cephfs metadata servers. make sure the
metadata pool is on fast ssd OSDs, and tune the mds cache to the mds
server's ram, so you cache as much metadata as possible.
Yes, we're in the process of doing that - I believe we're seeing the MDS
suffering when we saturate a few disks in the setup - and they are sharing.
Thus we'll move the metadata to SSD as per the recommendations.
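for what it's worth, those two steps look roughly like this - the rule and pool names below are only examples, adjust them to your cluster:

    # pin the cephfs metadata pool to ssd-class osds (device classes, luminous and newer)
    ceph osd crush rule create-replicated ssd-only default host ssd
    ceph osd pool set cephfs_metadata crush_rule ssd-only

    # and in ceph.conf on the mds hosts, give the mds a bigger cache (in bytes, here 16 GiB)
    [mds]
    mds_cache_memory_limit = 17179869184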


good luck

Ronny Aasen
