Re: [users@httpd] mod_ratelimit working by steps ?

2018-05-04 Thread Luca Toscano
Hi everybody,

as part of the plan to add more documentation about httpd's internals, I am
trying to debug some of the trickier bugs (tricky at least for me) reported
by our users, for example in order to understand in more depth how the
filter chain works.

This one caught my attention, and after a bit of testing I have some
follow-up questions; if you have time, let me know your thoughts :)

2018-04-19 13:47 GMT+02:00 :

> Hello all,
>
> I'm using Apache 2.4.24 on Debian 9 Stable, behind a DSL connection, with
> an estimated upload capacity of ~130kB/s.
> I'm trying to limit the bandwidth available to my users (per-connection
> limit is fine).
> However, it seems to me that the rate-limit parameter is coarse-grained:
>
> - if I set it to 8, users are limited to 8 kB/s
> - if I set it to 20, or 30, users are limited to 40 kB/s
> - if I set it to 50, 60 or 80, users are limited to my BW, so ~120 kB/s
>
>
After following up with the user it seems that the issue happens with
proxied content. So I've set up the following experiment:

- a Directory with a 4MB file inside
- a simple Location that proxies content via mod_proxy_http to a Python
process running a web server, which returns the same 4MB file mentioned
above (a rough sketch of the configuration is below)
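
For reference, the setup looked roughly like this (paths, port and the
rate-limit value here are made up for illustration, not my exact config):

<Directory "/var/www/html/ratelimit-test">
    # mod_ratelimit on directly served content
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 40
</Directory>

<Location "/proxied/">
    # same filter, but on content fetched from the backend
    ProxyPass "http://127.0.0.1:8000/"
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 40
</Location>

# the backend can be as simple as: python3 -m http.server 8000
# run from a directory containing the same 4MB file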

I tested the rate limit using curl's transfer summary (for example the
average Dload speed).

This is what I gathered:

- when httpd serves the file directly, mod_ratelimit's output filter is
called once and the bucket brigade contains all of the file's data. This is
probably due to how bucket brigades work when a file bucket is morphed?

- when httpd serves the file via mod_proxy, the output filter is called
multiple times, and each time the brigade carries at most ProxyIOBufferSize
bytes (8192 by default). I'm still not completely sure about this one, so
please let me know if I am totally wrong :)
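
If that reading is correct, one way to double-check it would be to raise
the buffer size and see whether the behaviour changes accordingly, e.g.:

# hypothetical value only, to test whether the buffer size
# is what caps each chunk handed to the filter
ProxyIOBufferSize 65536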

The main problem is, IIUC, in the output filter's logic, which does the
following: it calculates a chunk size based on the rate-limit set in the
httpd configuration, and then, if necessary, splits the bucket brigade into
buckets of that chunk size, interleaving them with FLUSH buckets (and
sleeping 200 ms between chunks).

So a trace of execution with, say, a chunk size of 8192 would be something
like:

First call of the filter: 8192 --> FLUSH --> sleep(200ms) --> 8192 --> ...
--> last chunk (either 8192 bytes or less).
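
To make that concrete, here is a very rough C sketch of the splitting
logic as I read it; the names and simplifications are mine, it is not the
actual mod_ratelimit code:

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

#define RATE_INTERVAL_MS 200

/* Rough sketch only: emit at most chunk_size bytes, then a FLUSH bucket,
 * then sleep before the next chunk.  chunk_size would be derived from the
 * configured rate-limit, e.g. rate_kb * 1024 * RATE_INTERVAL_MS / 1000. */
static apr_status_t ratelimit_sketch(ap_filter_t *f, apr_bucket_brigade *bb,
                                     apr_off_t chunk_size)
{
    /* Local variable: it starts from 0 again on every invocation of the
     * filter, which is the detail discussed below. */
    int do_sleep = 0;
    apr_status_t rv = APR_SUCCESS;

    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *after = NULL;
        apr_bucket_brigade *chunk;
        apr_bucket *flush;

        if (do_sleep) {
            apr_sleep(RATE_INTERVAL_MS * 1000);  /* apr_sleep() wants usec */
        }
        else {
            do_sleep = 1;   /* never sleep before the very first chunk */
        }

        /* Ensure a bucket boundary at chunk_size bytes (the call returns
         * APR_INCOMPLETE if the brigade is shorter), keep the first chunk
         * in 'chunk' and the remainder in 'bb'. */
        apr_brigade_partition(bb, chunk_size, &after);
        chunk = bb;
        bb = apr_brigade_split_ex(chunk, after, NULL);

        /* Force the chunk out to the client before sleeping again. */
        flush = apr_bucket_flush_create(f->c->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(chunk, flush);

        rv = ap_pass_brigade(f->next, chunk);
        if (rv != APR_SUCCESS) {
            break;
        }
    }
    return rv;
}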

This happens correctly when httpd serves the content directly, but not when
it is proxied:

First call of the filter: 8192 -> FLUSH (no sleep, since do_sleep is set to
1 only after the first flush)

Second call of the filter: 8192 -> FLUSH (no sleep)

...

So one way to alleviate this issue is to move do_sleep into the ctx data
structure, so that if the filter gets called multiple times it will
"remember" to sleep between flushes (on the assumption that ctx is
allocated once per request). There remains the problem that when the
configured rate-limit yields a chunk size greater than ProxyIOBufferSize
(8192 by default), the client is effectively rate limited by the buffer
size instead: 8192 bytes every 200 ms is 5 * 8192 = 40960 bytes/s, i.e.
~40 KB/s.
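
Concretely, the do_sleep part of the idea would look something like the
following; again this is only a sketch on top of my simplified code above,
not a patch against the real mod_ratelimit, and the struct and names are
made up:

typedef struct {
    apr_off_t chunk_size;   /* bytes allowed per RATE_INTERVAL_MS */
    int do_sleep;           /* now remembered across filter invocations */
} ratelimit_sketch_ctx;

static apr_status_t ratelimit_sketch_filter(ap_filter_t *f,
                                            apr_bucket_brigade *bb)
{
    ratelimit_sketch_ctx *ctx = f->ctx;
    apr_status_t rv;

    if (ctx == NULL) {
        /* Allocated once per request from the request pool, so the flag
         * survives even when mod_proxy hands us 8192 bytes at a time. */
        ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));
        ctx->chunk_size = 8192;   /* hypothetical: ~40 KB/s at 200ms steps */
        f->ctx = ctx;
    }

    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *after = NULL;
        apr_bucket_brigade *chunk;
        apr_bucket *flush;

        if (ctx->do_sleep) {
            apr_sleep(RATE_INTERVAL_MS * 1000);
        }
        else {
            ctx->do_sleep = 1;    /* stays 1 for the rest of the request */
        }

        /* Same chunk/FLUSH handling as in the earlier sketch. */
        apr_brigade_partition(bb, ctx->chunk_size, &after);
        chunk = bb;
        bb = apr_brigade_split_ex(chunk, after, NULL);
        flush = apr_bucket_flush_create(f->c->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(chunk, flush);

        rv = ap_pass_brigade(f->next, chunk);
        if (rv != APR_SUCCESS) {
            return rv;
        }
    }
    return APR_SUCCESS;
}

With do_sleep in ctx, the second and subsequent invocations start by
sleeping, so the overall pace stays at one chunk per 200 ms.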

Without the do_sleep-in-ctx change, though, as reported by the user, above
a certain rate-limit there is no sleep at all and hence almost no rate
limiting (only the FLUSH buckets, which might slow down the overall
throughput a bit).

Thanks for reading this far, I hope that what I wrote makes sense. If so,
I'd document this use case in the mod_ratelimit documentation page and
possibly submit a patch; otherwise I'll try to study more, following up on
your comments :)

Luca


Re: svn commit: r1830819 - in /httpd/httpd/trunk: ./ docs/log-message-tags/ docs/manual/mod/ modules/ssl/

2018-05-04 Thread Joe Orton
On Thu, May 03, 2018 at 10:20:23PM +0200, Christophe Jaillet wrote:
> On 03/05/2018 at 15:06, jor...@apache.org wrote:
> 
> > -SSLCertificateKeyFile file-path
> > +SSLCertificateKeyFile file-path|keyid
> 
> Mixing  and  looks odd, even if the result is the same.
> Just my 2c.

Fixed in r1830879 :)