Is the high-level issue that: for serving static content over HTTP, you can
use sendfile() from the OS filesystem cache, avoiding extra userspace
copying; but if it's SSL, or any other dynamic filtering of content, you
have to do extra work in userspace?
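
For concreteness, here's roughly the plain-HTTP path I mean: a minimal
sketch using Linux sendfile(2), with hypothetical names, and error
handling kept to a minimum:

    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Serve a static file with zero userspace copies: the kernel moves
     * pages straight from the filesystem cache to the socket. */
    static int serve_static(int client_sock, const char *path)
    {
        struct stat st;
        off_t offset = 0;

        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }

        while (offset < st.st_size) {
            /* sendfile() advances 'offset' by the bytes it sent. */
            ssize_t n = sendfile(client_sock, fd, &offset,
                                 st.st_size - offset);
            if (n <= 0)
                break;
        }
        close(fd);
        return (offset == st.st_size) ? 0 : -1;
    }

With TLS (or any filter that must transform the bytes) that shortcut is
unavailable: the file data has to be read into a userspace buffer first
and then fed through something like SSL_write(), which is where the file
bucket's read path and its buffer size start to matter.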


On Thu, Feb 16, 2017 at 6:01 PM, Yann Ylavic <[email protected]> wrote:

> On Thu, Feb 16, 2017 at 10:51 PM, Jacob Champion <[email protected]>
> wrote:
> > On 02/16/2017 02:49 AM, Yann Ylavic wrote:
> >>
> >> +#define FILE_BUCKET_BUFF_SIZE (64 * 1024 - 64) /* > APR_BUCKET_BUFF_SIZE */
> >
> >
> > So, I had already hacked my O_DIRECT bucket case to just be a copy of
> > APR's file bucket, minus the mmap() logic. I tried making this change on
> > top of it...
> >
> > ...and holy crap, for regular HTTP it's *faster* than our current mmap()
> > implementation. HTTPS is still slower than with mmap, but faster than it
> > was without the change. (And the HTTPS performance has been really
> > variable.)
> >
> > Can you confirm that you see a major performance improvement with the
> > new 64K file buffer?
>
> I can't test speed for now (I'm stuck with my laptop/localhost, which I
> guess won't be relevant enough).
>
> > I'm pretty skeptical of my own results at this
> > point... but if you see it too, I think we need to make *all* these
> > hard-coded numbers tunable in the config.
>
> We could also improve the apr_bucket_alloc()ator to recycle more
> order-n allocation sizes (saving as many {apr_allocator_,m}alloc()
> calls as possible); along with configurable/higher orders in httpd,
> that'd be great I think.
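
If I follow the allocator idea, it's to keep freelists for several
power-of-two size classes so that repeated 64K reads reuse their buffers
instead of going back to malloc() each time. A rough standalone sketch of
per-order recycling (illustration only, with made-up names; not APR's
actual code, and single-threaded, where a real allocator would need
locking or per-thread lists):

    #include <stdlib.h>

    #define MIN_ORDER_SIZE 4096          /* order 0 = one 4K page */
    #define MAX_ORDER      5             /* recycle up to 4K << 5 = 128K */

    typedef struct block { struct block *next; } block;

    static block *freelist[MAX_ORDER + 1];

    /* Smallest order whose size (4K << order) holds 'size' bytes,
     * or -1 if it exceeds the largest recycled class. */
    static int order_of(size_t size)
    {
        int order = 0;
        size_t s = MIN_ORDER_SIZE;
        while (s < size && order <= MAX_ORDER) {
            s <<= 1;
            order++;
        }
        return (order <= MAX_ORDER) ? order : -1;
    }

    static void *order_alloc(size_t size)
    {
        int order = order_of(size);
        if (order < 0)
            return malloc(size);         /* too big: no recycling */
        if (freelist[order]) {           /* reuse a cached block  */
            block *b = freelist[order];
            freelist[order] = b->next;
            return b;
        }
        return malloc((size_t)MIN_ORDER_SIZE << order);
    }

    static void order_free(void *p, size_t size)
    {
        int order = order_of(size);
        if (order < 0) {
            free(p);
            return;
        }
        ((block *)p)->next = freelist[order];  /* cache for reuse */
        freelist[order] = (block *)p;
    }

With something like this, a 64K-ish file buffer (order 4) would hit
freelist[4] on every request after the first, and making MAX_ORDER
configurable would correspond to the configurable/higher orders
mentioned above.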
>
> I can try this patch...
>
