On Thu, Feb 23, 2017 at 7:16 PM, Jacob Champion <champio...@gmail.com> wrote:
> On 02/23/2017 08:34 AM, Yann Ylavic wrote:
>> Actually I'm not very pleased with this solution (or the final one
>> that would make this size open / configurable).
>> The issue is potentially the huge (order big-n) allocations which
>> finally may hurt the system (fragmentation, OOM...).
>
> Power users can break the system, and this is a power tool, right?

It's not about power users; I don't think we can recommend that anyone
use 4MB buffers, even if they (seem to) have the RAM for it.

> And we
> have HugeTLB kernels and filesystems to play with, with 2MB and bigger
> pages... Making all these assumptions for 90% of users is perfect for the
> out-of-the-box experience, but hardcoding them so that no one can fix broken
> assumptions seems Bad.

I'm not talking about hardcoding anything, nor about reading or
sending hard-limited sizes on filesystems/sockets.
I'm proposing that the configuration relate to how much we "coalesce"
data on output, and that all buffer reuses (though each buffer is of
fixed size) should follow the needs.

>
> (And don't get me wrong, I think applying vectored I/O to the brigade would
> be a great thing to try out and benchmark. I just think it's a long-term and
> heavily architectural fix, when a short-term change to get rid of some
> #defined constants could have immediate benefits.)

Of course, apr_bucket_file_set_read_size() is a quick patch (I dispute
it for the general case, but it may be useful, or not, for the 16K SSL
case; let's see), and so is another one for configuring the
core_output_filter constants, but we don't need them for testing, right?

>
>> I've no idea how much it costs to have 8K vs 16K records, though.
>> Maybe in the mod_ssl case we'd want 16K buffers, still reasonable?
>
>
> We can't/shouldn't hardcode this especially. People who want maximum
> throughput may want nice big records, but IIRC users who want progressive
> rendering need smaller records so that they don't have to wait as long for
> the first decrypted chunk. It needs to be tunable, possibly per-location.

Well, the limit is 16K at the TLS level.


Regards,
Yann.
