Hi Cyril,

On Wed, Nov 22, 2017 at 12:07:09AM +0100, Cyril Bonté wrote:
> Hi Willy and William,
> 
> I ran some tests with the cache filter.
> 
> In http_action_store_cache(), the code indicates that only HTTP/1.1 is
> cached. This explains why I failed on my first tests with apachebench :)
> The protocol version is checked on the request side. Can't we rely on the
> response side instead ?

We could use both, but it's important to avoid the risk of a possibly
bogus client causing trouble by not sending appropriate request headers,
for example. Caching has evolved *a lot* since HTTP/1.0, and 1.1 is an
indicator of a reasonably modern client, while 1.0 is an indication of a
possibly tricky one. I would not be opposed to relaxing this after some
observation period, but for now I'd rather stay conservative, as excess
caching is even more likely to break certain sites than load balancing
alone.
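
For anyone wanting to reproduce such a test, a minimal cache setup could
look roughly like the sketch below. This is only illustrative (section
names, sizes and addresses are made up), based on the cache section and
the cache-use/cache-store actions from the 1.8 cache filter:

```
# Illustrative sketch only -- names, sizes and addresses are not
# from the tests discussed in this thread.
cache small-objects
    total-max-size 4          # total cache size, in megabytes
    max-age 60                # default object expiry, in seconds

frontend fe
    bind :80
    default_backend be

backend be
    http-request cache-use small-objects     # serve from cache on hit
    http-response cache-store small-objects  # store cacheable responses
    server srv1 192.168.0.10:80
```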

> Btw, here are some first results for those who are interested :
> 
> * WITHOUT cache
> 
> - ab -n100000 -c100 http://localhost/image.gif
> Time taken for tests:   59.127 seconds
> Requests per second:    1691.27 [#/sec] (mean)
> Time per request:       59.127 [ms] (mean)
> Time per request:       0.591 [ms] (mean, across all concurrent requests)
> Transfer rate:          1344.43 [Kbytes/sec] received
> 
> - h2load -n100000 -c100 https://localhost/image.gif
> finished in 60.06s, 1664.91 req/s, 1.11MB/s
> 
> * WITH cache :
> 
> - ab -n100000 -c100 http://localhost/image.gif
> Same results as before, but once patched to rely on the response, we get
> more interesting results :
> Time taken for tests:   1.801 seconds
> Requests per second:    55539.79 [#/sec] (mean)
> Time per request:       1.801 [ms] (mean)
> Time per request:       0.018 [ms] (mean, across all concurrent requests)
> Transfer rate:          44149.79 [Kbytes/sec] received
> 
> - h2load -n100000 -c100 https://localhost/image.gif
> finished in 1.49s, 67210.04 req/s, 44.80MB/s
> 
> for some details :
> - image.gif = 510 bytes
> - haproxy runs locally (the backend is in the same network) with this
> configuration :
(...)

Yes, that's really nice. I tested it with HTTP/2 here, and it's even better:
I reached 450k req/s on small objects, and 20 Gbps on "large" ones (12k)!
The gain is even more interesting there because for now H2 implies a
"server-close" mode, so caching will be nice here to limit the number of
connections to the server.

I'd really like to run a new test with haproxy in front of varnish, so
that haproxy takes care of load-aware consistent hashing and HTTP/2,
and varnish takes care of intelligent caching. Here the cost of the
connection between haproxy and varnish for small objects could easily be
absorbed by haproxy's cache, resulting in an overall higher performance
with the two playing together than any of them individually.
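
The load-aware consistent-hashing side of such a setup could be sketched
like this (addresses are hypothetical; hash-balance-factor is what gives
the bounded-load, i.e. load-aware, behaviour on top of consistent hashing):

```
# Illustrative sketch only -- haproxy hashing requests to varnish nodes.
backend varnish
    balance uri                 # hash on the request URI
    hash-type consistent        # consistent hashing across servers
    hash-balance-factor 150     # cap any server at 1.5x the average load
    server v1 192.168.0.21:6081 check
    server v2 192.168.0.22:6081 check
```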

Cheers,
Willy