Two things about this surprised me greatly:

1) That batching function calls using futures like this, keeping the icache
hot (at the cost of a slightly cooler dcache) in just a few parts of
ScyllaDB, led to a 50% speedup. I would never have thought it would make
such a difference.
2) Your codebase was already using futures, so the changes to ScyllaDB
itself were minimal. Very nicely done.

I will keep this technique, and Seastar itself, in mind for future
applications. Thanks for sharing!

On Thu, Mar 9, 2017 at 8:41 AM, Avi Kivity <[email protected]> wrote:

> We noticed in ScyllaDB that performance suffers due to high icache miss
> rate; we saw a very low IPC.
>
>
> We implemented a SEDA-like mechanism to run tasks tied to the same
> function sequentially; the first task warms up icache and the branch
> predictors, the second and further runs benefit from this warmup.
>
>
> The implementation of the mechanism in seastar can be viewed here:
> https://github.com/scylladb/seastar/commit/384c81ba7227a9a99d485d1bb68c98c5f3a6b209
>
>
> Usage in ScyllaDB, with some performance numbers, can be viewed here:
> https://github.com/scylladb/scylla/commit/efd96a448cca4499fd40df8b3df3f0f8444a1464.
> Microbenchmarks see almost 200% improvement, while full-system tests see
> around 50% improvement, mostly due to improved IPC.
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "mechanical-sympathy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
