Stack:

> Did you take measure of average/mean response times doing your blockcache
> comparison?


Yes, in total I collected mean, 50%, 95%, 99%, and 99.9% latency
values. I only analyzed the 99th percentile in the post. I also looked
briefly at the 99.9%, but it wasn't immediately relevant to the
context of the experiment. All of these data are included in the "raw
results" CSV I uploaded and linked from the "Showdown" post.
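For reference, a minimal sketch of how percentile figures like these can be derived from raw per-request latencies (the sample data here is made up; numpy is assumed to be available):

```python
import numpy as np

# Hypothetical raw per-request latencies in milliseconds, stand-ins
# for one column of the raw-results CSV.
latencies_ms = np.array([1.2, 0.9, 1.1, 45.0, 1.3, 0.8, 1.0, 120.0, 1.1, 0.95])

mean = latencies_ms.mean()
p50, p95, p99, p999 = np.percentile(latencies_ms, [50, 95, 99, 99.9])

print(f"mean={mean:.2f}ms p50={p50:.2f}ms p95={p95:.2f}ms "
      f"p99={p99:.2f}ms p99.9={p999:.2f}ms")
```

Note how a handful of outliers dominates the mean while leaving the median untouched, which is why the tail percentiles carry the interesting signal.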

> do you need more proof bucketcache subsumes slabcache?


I'd like more vetting, yes. As you alluded to in the previous question, a
more holistic view of response times would be good, and I'd also like to
see how they perform under a mixed workload. The next step is probably to
exercise them with some YCSB workloads at varying RAM:DB ratios.

Todd:

> the trend lines drawn on the graphs seem to be based on some assumption
> that there is an exponential scaling pattern.


Which charts are you specifically referring to? Indeed, the trend lines
were generated rather casually with Excel and may be misleading. Perhaps a
more responsible representation would be to simply connect each data point
with a line to aid visibility.
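To illustrate the difference, here's a sketch (with made-up measurements; numpy assumed) contrasting an Excel-style exponential trend with simply connecting the data points. Extrapolated past the measured range, the exponential fit keeps growing, while the connect-the-dots view carries the last observed value forward:

```python
import numpy as np

# Hypothetical (dataset size in GB, 99th-percentile latency in ms)
# points, stand-ins for the measurements in the post.
sizes = np.array([1.0, 5.0, 10.0, 20.0, 50.0])
lat99 = np.array([0.5, 0.7, 1.5, 9.0, 11.0])

# Excel-style exponential trend: fit log(latency) = a*size + b,
# then extrapolate to 60 GB.
a, b = np.polyfit(sizes, np.log(lat99), 1)
trend_60 = np.exp(a * 60.0 + b)

# Connecting the dots instead: np.interp only interpolates, and
# clamps to the last observed value beyond the measured range.
connected_60 = np.interp(60.0, sizes, lat99)

print(f"exponential trend at 60GB: {trend_60:.1f} ms")
print(f"last measured value carried forward: {connected_60:.1f} ms")
```

The gap between the two extrapolations is exactly the risk of drawing a fitted trend line over so few points.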

> In practice I would think it would be sigmoid [...] As soon as it starts to
> be larger than the cache capacity [...] as the dataset gets larger, the
> latency will level out as a flat line, not continue to grow as your trend
> lines are showing.


You're presumably correct when cache size is decoupled from database size. I
believe that's what's shown in the figures in perfeval_blockcache_v1.pdf,
especially as total memory increases; the plateau effect is suggested in
the 20G and 50G charts in that document. This is why I included the second
set of charts in perfeval_blockcache_v2.pdf. The intention there is to
couple cache size to dataset size and demonstrate how an implementation
performs as the absolute values increase. That is, assuming the hit and
eviction rates remain roughly constant, how well does an implementation
"scale up" to a larger memory footprint?
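That assumption can be sketched as a simple cost model (the service times and hit rate below are hypothetical, not measurements from the post):

```python
# Simple cost model: if the hit rate stays roughly constant as cache and
# dataset are scaled up together, per-request latency is a fixed blend of
# cache and disk service times and should stay flat -- any growth observed
# then reflects the cache implementation itself not scaling.
CACHE_HIT_MS = 0.2   # hypothetical in-memory block read
CACHE_MISS_MS = 8.0  # hypothetical read that falls through to disk
HIT_RATE = 0.95      # held constant across scales

def expected_latency_ms(hit_rate: float = HIT_RATE) -> float:
    """Expected per-request latency under a constant hit rate."""
    return hit_rate * CACHE_HIT_MS + (1.0 - hit_rate) * CACHE_MISS_MS

# The same cache:dataset ratio at 20 GB or 200 GB yields the same
# expected latency under this model.
print(f"{expected_latency_ms():.3f} ms")
```

Any upward slope in the coupled-scaling charts is therefore attributable to overhead in the implementation itself rather than to the cache simply missing more often.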

-n
