On Thu, 23 Feb 2017, Yann Ylavic wrote:
>> Technically, Yann's patch doesn't redefine APR_BUCKET_BUFF_SIZE, it
>> just defines a new buffer size for use with the file bucket. It's a
>> little less than 64K, I assume to make room for an allocation header:
>>
>>     #define FILE_BUCKET_BUFF_SIZE (64 * 1024 - 64)
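
Side note on the "- 64": my assumption (I haven't checked against
apr_allocator.c) is that APR's allocator prepends an apr_memnode_t
header to each node, so shaving a little off 64K keeps header plus
buffer within a single 64K node instead of spilling into the next
size class. Roughly:

/* All numbers assumed rather than taken from the APR source: */
#define NODE_SIZE          (64 * 1024)  /* whole allocator node     */
#define ASSUMED_HDR_MAX    64           /* room for apr_memnode_t   */
#define FILE_BUCKET_BUFF_SIZE (NODE_SIZE - ASSUMED_HDR_MAX) /* 65472 */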
> Actually I'm not very pleased with this solution (or the final one
> that would make this size open / configurable).
>
> The issue is potentially the huge (order big-n) allocations which
> finally may hurt the system (fragmentation, OOM...).
Is this a real or theoretical problem?
Our large-file cache module does 128k allocations to get a sane block
size when copying files to the cache. The only potential drawback we
noticed was httpd processes becoming bloated with the default
MaxMemFree of 2048, so we're running with MaxMemFree 256 now. I don't
know if things got much better, but it isn't breaking anything
either...
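
For reference, the knob in question (MaxMemFree takes its limit in KB
of free memory that each allocator may hold on to; 2048 is the stock
default):

# httpd.conf excerpt
# default is MaxMemFree 2048
MaxMemFree 256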
Granted, doing alloc/free for all outgoing data means way more
allocs/frees, so we might simply be missing the issues because cache
fills aren't as common.
However, for large file performance I really don't buy into the notion
that it's a good idea to break everything into tiny puny blocks. The
potential for wasting CPU cycles on this micro-management is rather
big...
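
To put a rough number on that micro-management, here's the kind of
back-of-the-envelope I have in mind (block sizes assumed: 8000 is
what I recall the stock APR_BUCKET_BUFF_SIZE being, 65472 is Yann's
file-bucket size, 128k is what our cache uses):

/* Illustrative arithmetic only: read/alloc/free round trips needed
 * to push a 1 GiB file through the brigade at various block sizes. */
#include <stdio.h>

int main(void)
{
    const unsigned long file_size = 1UL << 30;  /* 1 GiB */
    const unsigned long block[] = { 8000, 64 * 1024 - 64, 128 * 1024 };

    for (int i = 0; i < 3; i++)
        printf("%6lu-byte blocks: %6lu iterations\n",
               block[i], (file_size + block[i] - 1) / block[i]);
    return 0;
}

That's roughly a 16x difference in loop iterations between the stock
size and our 128k blocks.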
I can see it working for a small-file workload where files aren't much
bigger than tens of kB anyway, but not so much for large-file
delivery.
A prudent way forward might be to first investigate what impact
different block sizes have with respect to SSL/https.
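
One thing worth knowing there (my understanding of TLS/OpenSSL, not
something I've measured in httpd): a single TLS record carries at
most 16 kB of plaintext, so blocks above that get chopped into
multiple records by the ssl layer no matter what the buckets do:

/* The record-size ceiling, straight from OpenSSL's headers. */
#include <openssl/ssl3.h>
#include <stdio.h>

int main(void)
{
    /* 16384 bytes of plaintext per record, maximum. */
    printf("max TLS record plaintext: %d bytes\n",
           SSL3_RT_MAX_PLAIN_LENGTH);
    return 0;
}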
As networking speeds go up, it is to be expected that block sizes
need to go up as well, especially since per-core clock frequency isn't
increasing much (server CPUs have been at 2-ish GHz base frequency for
the last ten years now?) and we're relying more and more on various
offload mechanisms in CPUs/NICs etc. to get us from 1 Gbps to 10 Gbps
to 100 Gbps...
I do find iovecs useful; it's the small blocks that get me into
skeptic mode...
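
For the record, this is the upside I mean (a plain writev(2) sketch;
apr_socket_sendv() would be the portable route in httpd): several
scattered buffers go out in a single syscall:

#include <sys/types.h>
#include <sys/uio.h>

/* Gather header and body fragments into one syscall instead of one
 * write() per fragment.  Buffers and lengths are made-up examples. */
ssize_t send_parts(int fd)
{
    struct iovec vec[] = {
        { .iov_base = "HTTP/1.1 200 OK\r\n",        .iov_len = 17 },
        { .iov_base = "Content-Length: 5\r\n\r\n",  .iov_len = 21 },
        { .iov_base = "hello",                      .iov_len = 5  },
    };
    return writev(fd, vec, 3);
}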
Kinda related: modern CPUs also support larger page sizes. Has anyone
investigated whether it makes sense to allocate memory pools in chunks
that fit those large pages?
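
To make the question concrete, something along these lines is what
I'm picturing (Linux-specific and entirely untested as an allocator
strategy; MAP_HUGETLB needs huge pages reserved by the admin and a
size that's a multiple of the huge page size):

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Back an allocator node with 2 MiB huge pages instead of the usual
 * 4 KiB ones.  Returns NULL when no huge pages are available, so a
 * real allocator would fall back to a plain mmap()/malloc() path. */
static void *alloc_huge_node(size_t size)  /* multiple of 2 MiB */
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}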
/Nikke
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se | ni...@acc.umu.se
---------------------------------------------------------------------------
You need not worry about your future....
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=