Vickey,
The OSDs sit on top of a filesystem, and any unused memory will automatically be 
used by the filesystem as page cache.
But the read-performance improvement depends on the pattern in which the 
application reads data and on the size of the working set.
A sequential pattern will benefit most (you may need to tweak read_ahead_kb to 
a bigger value). A random workload will also benefit if the working set is not 
too big. For example, a LUN of, say, 1 TB with an aggregated OSD page cache of, 
say, 200 GB will benefit more than a LUN of, say, 100 TB with a similar amount 
of page cache (assuming a truly random pattern).
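A quick back-of-the-envelope sketch (not from the mail above; it assumes a 
uniformly random access pattern and uses the 200 GB cache / 1 TB vs. 100 TB 
LUN figures from the example) of why the smaller LUN benefits more:

```python
def random_read_hit_rate(cache_bytes: float, working_set_bytes: float) -> float:
    """For uniformly random reads, the steady-state chance that a block
    is already in page cache is roughly cache size / working set size,
    capped at 1. This is a rough model, not a Ceph-specific formula."""
    return min(1.0, cache_bytes / working_set_bytes)

GB = 1024 ** 3
TB = 1024 ** 4

small_lun = random_read_hit_rate(200 * GB, 1 * TB)     # ~0.195 (about 20%)
big_lun = random_read_hit_rate(200 * GB, 100 * TB)     # ~0.002 (about 0.2%)

print(f"1 TB LUN:   {small_lun:.3f} hit rate")
print(f"100 TB LUN: {big_lun:.4f} hit rate")
```

For the sequential case, readahead is controlled per block device via 
`/sys/block/<device>/queue/read_ahead_kb`; the exact value to use depends on 
your hardware and workload.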

Thanks & Regards
Somnath

From: ceph-users [mailto:[email protected]] On Behalf Of Vickey 
Singh
Sent: Monday, September 07, 2015 2:19 PM
To: [email protected]
Subject: [ceph-users] Extra RAM use as Read Cache

Hello Experts ,

I want to increase my Ceph cluster's read performance.

I have several OSD nodes with 196 GB of RAM each. On my OSD nodes, Ceph uses 
just 15-20 GB of RAM.

So, can I instruct Ceph to make use of the remaining 150 GB+ of RAM as a read 
cache, so that it caches data in RAM and serves it to clients very fast?

I hope that if this can be done, I can get a good read-performance boost.


By the way, we have a Lustre cluster that uses extra RAM as read cache, and we 
can get up to 2.5 GB/s read performance. I am looking for a way to do the same 
with Ceph.

- Vickey -





