On Thursday, March 9, 2017 at 11:42:03 AM UTC-5, Avi Kivity wrote:

> We noticed in ScyllaDB that performance suffers due to high icache miss
> rate; we saw a very low IPC.

I wonder if we can use an implicit execution stage within the future scheduler via tags, à la
template<typename T, typename Tag = void> future<T>... or maybe a new type? tagged_future<Tag, VariadicTypes...>? Maybe it doesn't work because of the variadic template parameters of future<>?

Did you play with the size of the batches in the function-call groups, i.e. how performance changes from, say, 10 consecutive calls vs. 100 vs. 1000? In particular, did batching have a major negative effect on the I/O loop? Just wondering.

Thanks for sharing. I have to add this to my seastar work too. I guess you found this most useful in scylla around compressing and writing to disk, or did you have other insights as to where in scylla it would be most helpful? I'm trying to understand how to apply this to some of my seastar stuff.

> We implemented a SEDA-like mechanism to run tasks tied to the same
> function sequentially; the first task warms up icache and the branch
> predictors, the second and further runs benefit from this warmup.
>
> The implementation of the mechanism in seastar can be viewed here:
> https://github.com/scylladb/seastar/commit/384c81ba7227a9a99d485d1bb68c98c5f3a6b209
>
> Usage in ScyllaDB, with some performance numbers, can be viewed here:
> https://github.com/scylladb/scylla/commit/efd96a448cca4499fd40df8b3df3f0f8444a1464
>
> Microbenchmarks see almost 200% improvement while full-system tests see
> around 50% improvement, mostly due to improved IPC.
