Wow, from 695 requests per second to 49,516 is a huge improvement!

Since we were comparing to Django previously: the result is now much
closer to Django's (which does 78,132 rps).

Django also runs multiple worker processes there (3 per CPU).

The other tests are also doing much better.

So do I understand correctly that the factors that contribute to the
improvement are:

1. running multiple worker processes behind nginx
2. adding content-length to all responses
3. using CS variant of Racket
4. using Racket 7.7
5. tuning the nginx config (especially enabling HTTP/1.1)
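
For #1 and #5 together, I imagine the setup looks roughly like the
sketch below (this is my guess at the shape of it, not the actual
config from the benchmark repo; the ports and worker count are made
up):

```nginx
# Several Racket worker processes behind an Nginx load balancer,
# with HTTP/1.1 keep-alive connections to the upstream workers.
upstream racket_workers {
    server 127.0.0.1:8001;
    server 127.0.0.2:8002;   # one entry per worker process
    keepalive 64;            # pool of idle upstream connections
}

server {
    listen 80;

    location / {
        proxy_pass http://racket_workers;
        proxy_http_version 1.1;          # Nginx defaults to HTTP/1.0 upstream
        proxy_set_header Connection "";  # allow keep-alive to the workers
    }
}
```

Without `proxy_http_version 1.1` and the `keepalive` pool, Nginx opens
a fresh connection to a worker for every request, which would explain
a large part of the difference.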

I'm curious what the individual contribution of each of these factors
was. In the PR regarding #5 you already stated that it gives a 4-5x
improvement.

#1 and #5 are the things one would normally do anyway in a production
setup I guess. As is #4 most likely.

#2 is something that seems to require manual work in the client code,
but maybe that can be made easier on the web-server-lib side somehow.
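
For instance, something like the helper below could hide the manual
work (this is just a sketch of what I mean, using web-server-lib's
`response/full` and `header`; `bytes-response` is a hypothetical name,
not an existing function):

```racket
#lang racket
(require web-server/http)

;; Hypothetical helper: wrap a bytes body in a response that carries
;; an explicit Content-Length header, so the connection can be kept
;; alive instead of being closed to delimit the body.
(define (bytes-response body #:mime [mime #"text/plain; charset=utf-8"])
  (response/full
   200 #"OK"
   (current-seconds) mime
   (list (header #"Content-Length"
                 (string->bytes/utf-8
                  (number->string (bytes-length body)))))
   (list body)))

;; Servlet code would then just write:
;;   (bytes-response #"Hello, world!")
```

If web-server-lib added Content-Length automatically whenever the full
body is known up front, #2 would stop being a concern for servlet
authors entirely.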

I'm also curious how much #3 contributes.

Big thanks again for everyone investing time in investigating this.

Yury Bulka

Bogdan Popa <> writes:

> Small update on this: I've updated the benchmarks to run multiple Racket
> processes with an Nginx load balancer in front.  After some tuning[1], here
> is what the results look like on my 12 core AMD Ryzen 9 3900 server:
> 50k/s is a respectable number for the plaintext benchmark IMO and we
> could get it to go higher if we could ditch Nginx or spend more time
> improving the server's internals, as Sam suggested.
> The `racket-perf' benchmark is for a branch[2] that I have where I've made
> some small improvements to the server's internals.
> [0]:
> [1]:
> [2]:
