dbohdan wrote on 09/07/2017 04:52 PM:
> Last, I ran the benchmark over the Internet with two machines about
> 1.89×10^-10 light years apart. The applications ran on a very humble VPS.
> Due to its humbleness I had to reduce the number of concurrent connections to 25.

The requests-per-second numbers for each implementation are suspiciously similar. I wonder whether they're limited by an accounting limit imposed on the VPS (such as network bytes per second or open TCP connections), or by some other host or network limit.

  "places", "many-places", and racket-scgi ran out of memory with as few as 10 
concurrent connections (racket-scgi seemingly due to nginx),

I want to acknowledge that this humble-VPS benchmarking is, or at least approximates, a real scenario: for example, small embedded/IoT devices communicating via HTTP, or students/hobbyists using "free tier" VPSs/instances.

Just to note for the email list: small devices tend to force us to think about resources earlier and more often than bigger devices do. For example, there's a difference between "I'm trying to fit the image into this little computer's flash" or "this little computer can barely boot before we start getting out-of-memory process kills", and "I've been coding this Web app for three months and haven't really thought about what size and how many Amazon EC2 instances we'll need for the non-CDN serving." GC is another thing you tend to feel earlier on a small device.

I think we could probably find a way to serve 25 simultaneous connections via Racket on a pretty small device (maybe even a small OpenWrt home WiFi router; see http://www.neilvandyke.org/racket-openwrt/). As is often the case on small devices, it takes some upfront architecture and software decisions, with time and space a high priority, and then often some tuning beyond that.

For purposes of this last humble-VPS benchmarking (if we can keep making more benchmarking work for you), you might get those initial numbers from places/many-places/racket-scgi by setting Racket's memory-usage limit. That might force it to GC early and often, and give poorer numbers, but at least it would be running (the first priority is not to exhaust system memory, have processes OOM-killed, deadlock, etc.).
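
(In case it's useful, here is a rough sketch of one way to impose such a limit, using custodian-limit-memory, assuming a Racket build where custodian-memory-accounting-available? returns true. The 64 MB cap is an arbitrary placeholder, and the allocation loop merely stands in for starting the actual app.)

  #lang racket/base

  ;; Hypothetical sketch: run the work under a custodian with a memory
  ;; cap, so that Racket shuts the work down (the cap is checked at GC
  ;; time) instead of the kernel OOM-killing the whole process.
  (unless (custodian-memory-accounting-available?)
    (error "this Racket build cannot account memory per custodian"))

  (define app-custodian (make-custodian))
  (custodian-limit-memory app-custodian (* 64 1024 1024)) ; placeholder 64 MB cap

  (define app-thread
    (parameterize ([current-custodian app-custodian])
      (thread
       (lambda ()
         ;; Stand-in for starting places/many-places/racket-scgi; this
         ;; just allocates until the cap shuts the custodian down.
         (let loop ([acc '()])
           (loop (cons (make-bytes 4096) acc)))))))

  (thread-wait app-thread)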

For the racket-scgi + nginx setup, if nginx can't quickly be tuned to not be a problem itself, there are HTTP servers targeting smaller devices, such as the one OpenWrt uses for its admin interface. But I'd be tempted to whip up a fast and light HTTP server in pure Racket, and then to tackle GC delays and process growth.
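
To make the "fast and light HTTP server in pure Racket" idea concrete, here is roughly the shape I have in mind. Just a sketch (raw TCP, one Racket thread per connection, a canned response, no real HTTP parsing or error handling), with an arbitrary port number and response body:

  #lang racket/base
  (require racket/tcp)

  ;; Sketch of a minimal pure-Racket HTTP responder: accept loop plus
  ;; one thread per connection.  Reads and discards the request head,
  ;; then sends a fixed reply and closes the connection.
  (define (serve port)
    (define listener (tcp-listen port 25 #t)) ; backlog sized for the 25-connection test
    (let accept-loop ()
      (define-values (in out) (tcp-accept listener))
      (thread
       (lambda ()
         (let drain ()  ; skip the request head, up to the blank line
           (define line (read-line in 'return-linefeed))
           (unless (or (eof-object? line) (string=? line ""))
             (drain)))
         (write-bytes #"HTTP/1.1 200 OK\r\nContent-Length: 6\r\nConnection: close\r\n\r\nhello\n" out)
         (close-output-port out)
         (close-input-port in)))
      (accept-loop)))

  (serve 8080)

From there, the obvious next steps toward the GC and process-growth concerns would be things like per-connection custodians with limits, buffer reuse, and keep-alive, but that's beyond a sketch.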

(That "tempted" is hypothetical. Personally, any work I did right now would likely be a holistic approach to a particular consulting client's/employer's particular needs. Hopefully, this would contribute back open source Racket packages and knowledge. But the contributions would probably be of the form "here's one way to do X and Y, which well works for our needs, in context with A, B, and C requirements and other architectural decisions", which is usually not the same as "here's an overall near-optimal generalized solution to a familiar class of problem". Unless the client/employer needs are for the generalized solution, or they are altruistic on this.)
