Just a few corrections, hope you don't mind

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Mike Lovell
> Sent: 20 March 2017 20:30
> To: Webert de Souza Lima <webert.b...@gmail.com>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] cephfs cache tiering - hitset
> 
> i'm not an expert but here is my understanding of it. a hit_set keeps track of
> whether or not an object was accessed during the timespan of the hit_set.
> for example, if you have a hit_set_period of 600, then the hit_set covers a
> period of 10 minutes. the hit_set_count defines how many of the hit_sets to
> keep a record of. setting this to a value of 12 with the 10 minute
> hit_set_period would mean that there is a record of objects accessed over a
> 2 hour period. the min_read_recency_for_promote, and its newer
> min_write_recency_for_promote sibling, define how many of these hit_sets
> an object must be in before that object is promoted from the storage pool
> into the cache pool. if this were set to 6 with the previous examples, it means
> that the cache tier will promote an object if that object has been accessed at
> least once in 6 of the 12 10-minute periods. it doesn't matter how many
> times the object was used in each period and so 6 requests in one 10-minute
> hit_set will not cause a promotion. it would be any number of accesses in 6
> separate 10-minute periods over the 2 hours.
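To put numbers on the example above, a quick sketch (variable names mirror the
pool settings; this is an illustration, not Ceph code):

```python
# Illustration of the hit_set arithmetic in the quoted example.
# Names mirror the Ceph pool settings; the values are the example's,
# not recommendations.
hit_set_period = 600   # seconds covered by each hit_set (10 minutes)
hit_set_count = 12     # number of hit_sets retained

# Total window of access history kept:
coverage_seconds = hit_set_period * hit_set_count
print(coverage_seconds / 3600)  # -> 2.0 (hours of access history)
```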

Sort of, the recency looks at the last N most recent hitsets. So if set to 6, 
the object would have to be in all of the last 6 hitsets. Because of this, during 
testing I found that setting recency above 2 or 3 made the behavior quite binary: if 
an object was hot enough, it would probably be in every hitset; if it was only 
warm, it would never be in enough hitsets in a row. I did experiment with X-out-of-N 
promotion logic, i.e. the object must be in 3 hitsets out of the last 10, not 
necessarily sequential. If you could find the right numbers to configure, you 
could get improved cache behavior, but if not, there was a large chance it would 
be worse.
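The difference between the two promotion rules can be sketched like this (these
are hypothetical helper functions for illustration, not actual Ceph code):

```python
# Sketch of the two promotion rules discussed above. hitsets is a list
# of sets of object names, ordered newest first. Illustration only.

def promote_last_n(hitsets, obj, recency):
    # Actual recency behavior: the object must appear in ALL of the N
    # most recent hitsets (consecutive), which is why high recency
    # values behave in a binary hot/cold fashion.
    return all(obj in hs for hs in hitsets[:recency])

def promote_x_of_n(hitsets, obj, x, n):
    # Experimental "X out of N" logic: the object must appear in at
    # least X of the last N hitsets, not necessarily consecutively.
    return sum(obj in hs for hs in hitsets[:n]) >= x

# A "warm" object seen in 3 of the last 10 periods, never 3 in a row:
hitsets = [{"a"}, set(), {"a"}, set(), {"a"},
           set(), set(), set(), set(), set()]
print(promote_last_n(hitsets, "a", 3))      # False: misses a hitset
print(promote_x_of_n(hitsets, "a", 3, 10))  # True: 3 hits out of 10
```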

For promotion I think having more hitsets probably doesn't add much value, but 
they may help when it comes to determining what to flush.

> 
> this is just an example and might not fit well for your use case. the systems
> i run have a lower hit_set_period, higher hit_set_count, and higher recency
> options. that means that the osds use some more memory (each hit_set
> takes space but i think they use the same amount of space regardless of
> period) but each hit_set covers a smaller amount of time. the longer the period,
> the more likely a given object is in the hit_set. without knowing your access
> patterns, it would be hard to recommend settings. the overhead of a
> promotion can be substantial and so i'd probably go with settings that only
> promote after many requests to an object.
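For reference, these knobs are set per cache pool; a sketch (the pool name
"hot-pool" and the values here are hypothetical, pick your own):

```shell
# Hypothetical example values; tune for your own access patterns.
ceph osd pool set hot-pool hit_set_type bloom
ceph osd pool set hot-pool hit_set_period 600
ceph osd pool set hot-pool hit_set_count 12
ceph osd pool set hot-pool min_read_recency_for_promote 2
ceph osd pool set hot-pool min_write_recency_for_promote 2
```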

Also in Jewel there is a promotion throttle which will limit promotions to 4MB/s 
by default
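If you need to change it, the throttle is controlled by the
osd_tier_promote_max_bytes_sec / osd_tier_promote_max_objects_sec OSD options
(verify the exact names and defaults against your release notes), e.g.:

```shell
# Hypothetical runtime change: raise the promotion throttle to 8MB/s
# on all OSDs. Check the option name/default for your Jewel release.
ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 8388608'
```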

> 
> one thing to note is that the recency options only seemed to work for me in
> jewel. i haven't tried infernalis. the older versions of hammer didn't seem to
> use the min_read_recency_for_promote properly and 0.94.6 definitely had a
> bug that could corrupt data when min_read_recency_for_promote was more
> than 1. even though that was fixed in 0.94.7, i was hesitant to increase it
> while still on hammer. min_write_recency_for_promote wasn't added till after
> hammer.
> 
> hopefully that helps.
> mike
> 
> On Fri, Mar 17, 2017 at 2:02 PM, Webert de Souza Lima
> <webert.b...@gmail.com> wrote:
> Hello everyone,
> 
> I'm deploying a ceph cluster with cephfs and I'd like to tune ceph cache
> tiering, and I'm a little bit confused by the settings hit_set_count,
> hit_set_period and min_read_recency_for_promote.
> The docs are very lean and I can't find any more detailed explanation
> anywhere.
> 
> Could someone provide me a better understanding of this?
> 
> Thanks in advance!
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

