> On 20.01.2017 at 03:57, Kyriakos Zarifis <kyr.zari...@gmail.com> wrote:
>
> * ... and by "here" I meant "here"
:)

> On Thu, Jan 19, 2017 at 6:54 PM, Kyriakos Zarifis <kyr.zari...@gmail.com> wrote:
>
> Sounds great!
>
>> Very interested. I'd like to add a page over at
>> https://icing.github.io/mod_h2/ about it, so that people can easily grasp
>> what the advantages are. For that, your numbers (do you have screenshots of
>> browser timelines, maybe?) would be very welcome. Also, that someone besides
>> the module author has measured it adds credibility. :-)
>>
>> If you write yourself somewhere about it, I am happy to link that.
>
> Since anything I write would be incomplete without your description of what
> caused it and how you resolved it, I put together a WIP write-up here, with
> screenshots and a link to logs*. Feel free to use it as you want, or let me
> know if you'd like more details; I'm happy to help write the complete story
> for a page on https://icing.github.io/mod_h2/, which is probably the most
> reasonable place to gather the relevant bits.
>
> Cheers
>
> * I reran the tests to capture timeline screenshots, so the server logs don't
>   exactly correspond to those screenshots, but the behaviors were the same.
>
> * Note that the delays in the timeline pictures are worse than those seen in
>   the server logs, which have been more helpful for understanding
>   application-layer behavior. I think what's causing this is a bloated output
>   buffer in the case where the server aggressively writes low-priority data
>   (I verified this by monitoring the buffer size, which keeps increasing
>   during the test).

Certainly an area to improve upon. mod_http2 is still writing so much into the
socket that responsiveness suffers. This gives the best throughput performance,
though. I know that the h2o server guys also experimented with interrogating
TCP windows to prevent bloat. Have to look at that again.

Cheers,

Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de
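
P.S. For anyone curious what "interrogating" the kernel's TCP state before
flushing more low-priority data could look like, here is a minimal sketch. It
is not mod_http2 or h2o code: it assumes Linux, uses the TCP_INFO socket
option, and the function name suggested_write_budget is made up for
illustration.

    /* Hypothetical sketch, not taken from mod_http2 or h2o: before flushing
     * more low-priority DATA frames, ask the kernel (Linux, TCP_INFO) how much
     * room is left in the congestion window and cap the write to roughly that,
     * so high-priority frames don't end up queued behind a bloated send buffer. */
    #include <stddef.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Returns a suggested upper bound in bytes for the next write on fd,
     * or 'fallback' if the kernel gives us nothing usable. */
    static size_t suggested_write_budget(int fd, size_t fallback)
    {
        struct tcp_info info;
        socklen_t len = sizeof(info);

        memset(&info, 0, sizeof(info));
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) != 0) {
            return fallback;   /* no TCP_INFO: keep the current behavior */
        }
        if (info.tcpi_snd_cwnd <= info.tcpi_unacked) {
            return 0;          /* cwnd already full: writing now only buffers */
        }
        /* Free space in the congestion window, converted from segments to bytes. */
        return (size_t)(info.tcpi_snd_cwnd - info.tcpi_unacked)
               * (size_t)info.tcpi_snd_mss;
    }

TCP_NOTSENT_LOWAT is another Linux socket option aimed at the same problem; it
limits how much unsent data is allowed to pile up in the kernel's send queue.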