Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
dbohdan wrote on 09/09/2017 02:40 PM:
> When I limit the memory usage in racket-custom to the total RAM on the VPS minus what the OS uses (through custodian-limit-memory), Racket quits with an out-of-memory error at the point when it would be killed by the OS. racket-scgi seems to behave the same, though I didn't look at the memory usage split between Racket and nginx when I tested it.

Especially for small devices pushed to the limit, like this benchmark is approximating... We can manage process size in Racket so that it doesn't get OOM-killed or crash on a failed allocation at a bad time in the Racket VM. This can be done via smaller limits within Racket, the timing of GC, application code being savvy about allocations, maybe something with Racket Places, or being creative with some of the properties of Linux host processes.

For the racket-scgi + nginx setup, if nginx can't quickly be tuned to not be a problem itself, there are HTTP servers targeting smaller devices, like what OpenWrt uses for its admin interface. But do they support SCGI? I used Lighttpd several years ago, which supports SCGI, though I don't know its current resource footprint. (I used Lighttpd as a tiny Web server within each cloned Windows image in an experimental research virtualization testbed, and it worked fine for that light purpose.)

For Racket Web serving on *small* devices, I'd want to try a lightweight, hand-optimized HTTP server in pure Racket, not put Nginx/Apache/etc. and SCGI/FastCGI in front of it. Nginx and Apache might not be carrying their own weight on a small device, for the kinds of applications I'd expect on a small device (unless you need to implement an organization's complex, custom SSO authentication method, and there's an Apache module for it). Other reasons for a fronting server don't usually apply to small devices: serving high-volume static content from the same host/port, using off-the-shelf load balancing, and possibly an off-the-shelf attempt at enduring DoS.
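The custodian-based limiting described above can be sketched roughly as follows. This is a minimal illustration, not code from the benchmark: the function name `serve-with-cap` and the 64 MB cap are made up here.

```racket
#lang racket

;; Minimal sketch (hypothetical names): run a handler in a thread under
;; a fresh custodian with a hard memory cap. If allocation charged to
;; the custodian exceeds the cap, Racket shuts that custodian down,
;; killing the handler thread instead of the whole process being
;; OOM-killed by the OS.
(define (serve-with-cap handler cap-bytes)
  (define cust (make-custodian))
  ;; Shut down `cust` itself once its accounted memory passes `cap-bytes`.
  (custodian-limit-memory cust cap-bytes cust)
  (parameterize ([current-custodian cust])
    (thread handler)))

;; A well-behaved handler under a 64 MB cap runs to completion.
(define done? (box #f))
(thread-wait (serve-with-cap (lambda () (set-box! done? #t))
                             (* 64 1024 1024)))
```

Per-custodian memory accounting is approximate, so on a real small device the cap would need tuning against actual traffic rather than being set to an exact RAM figure.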
Racket's I/O and language are sophisticated enough that you can do some clever performance things, and maybe that's how you make a particular application on a particular device viable.

Your benchmarking work has been good for getting some interest and discussion going. Racket made a good showing in some of the benchmarks already, but these aren't going to show off the best that can be done in Racket, since a lot of space is yet to be explored. Two ways to move forward: (1) work on individual real-world applications, incidentally advancing Racket's capabilities in ways that transfer to some other applications; and (2) an a priori generalized effort like "we want to make a Racket solution for many simultaneous clients of trivial/nontrivial Web services on small devices, that will usually do what people need, out of the box", and/or a similar effort for large-scale Web services/applications.

The Racket community's skill base is capable of both #1 and #2 above. But, for funding reasons (I suspect it's hard to find a research angle on #2, unless it involves something novel and big with the Racket backend), I suspect that an organic #1 is more likely than #2. A possible exception in favor of #2 is if someone has the hobby time available to go to war on pure benchmarks, without a motivating/guiding application (and I could certainly appreciate the appeal of that, when one has the time).

--
You received this message because you are subscribed to the Google Groups "Racket Users" group. To unsubscribe from this group and stop receiving emails from it, send an email to racket-users+unsubscr...@googlegroups.com. For more options, visit https://groups.google.com/d/optout.
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Friday, September 8, 2017 at 1:09:19 PM UTC+3, Jay McCarthy wrote:
> Wow! Thanks for all of this work. It is really interesting to see how
> different the performance is on the Internet workload!

Once again, you're welcome! See my reply to Neil Van Dyke for some reasoning about the Internet workload and more results.
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Friday, September 8, 2017 at 4:29:34 PM UTC+3, Neil Van Dyke wrote:
> dbohdan wrote on 09/07/2017 04:52 PM:
> > The #/sec for each implementation are suspiciously similar.
> I wonder whether they're limited by something like an accounting limit
> imposed on the VPS (such as network-bytes-per-second or
> TCP-connections-open), or by some other host/network limit.

While the VPS provider does impose a limit on throughput, at approximately 250 req/s * 5 KB/req = 1.25 MB/s I wasn't hitting it. The numbers were very similar for different applications because at 25 concurrent connections no application reached the maximum request rate it could sustain. I thought the memory constraints wouldn't allow for more than about 25 connections, but I was mistaken. With some tuning I was able to get the applications that ran with 25 concurrent connections to run with 50 and 100. I've rerun the benchmark with 1, 25, 50, 100, and 200 connections to show how the differences between the applications emerge.
== CONNECTIONS=1
remote-results/caddy.txt:Requests per second: 12.57 [#/sec] (mean)
remote-results/compojure.txt:Requests per second: 12.41 [#/sec] (mean)
remote-results/custom-many-places.txt:Requests per second: 12.62 [#/sec] (mean)
remote-results/custom-many.txt:Requests per second: 12.55 [#/sec] (mean)
remote-results/custom-places.txt:Requests per second: 12.56 [#/sec] (mean)
remote-results/custom-single.txt:Requests per second: 12.58 [#/sec] (mean)
remote-results/flask.txt:Requests per second: 12.44 [#/sec] (mean)
remote-results/guile.txt:Requests per second: 12.53 [#/sec] (mean)
remote-results/plug.txt:Requests per second: 12.57 [#/sec] (mean)
remote-results/scgi.txt:Requests per second: 12.46 [#/sec] (mean)
remote-results/sinatra.txt:Requests per second: 12.08 [#/sec] (mean)
remote-results/stateful.txt:Requests per second: 12.42 [#/sec] (mean)
remote-results/stateless.txt:Requests per second: 12.41 [#/sec] (mean)
==

== CONNECTIONS=25
remote-results/caddy.txt:Requests per second: 311.19 [#/sec] (mean)
remote-results/compojure.txt:Requests per second: 309.69 [#/sec] (mean)
remote-results/custom-many-places.txt:(Killed) Total of 9153 requests completed
remote-results/custom-many.txt:Requests per second: 309.63 [#/sec] (mean)
remote-results/custom-places.txt:(Killed) Total of 13085 requests completed
remote-results/custom-single.txt:Requests per second: 308.02 [#/sec] (mean)
remote-results/flask.txt:Requests per second: 310.91 [#/sec] (mean)
remote-results/guile.txt:Requests per second: 310.28 [#/sec] (mean)
remote-results/plug.txt:Requests per second: 313.60 [#/sec] (mean)
remote-results/sinatra.txt:Requests per second: 287.03 [#/sec] (mean)
remote-results/stateful.txt:Requests per second: 298.05 [#/sec] (mean)
remote-results/stateless.txt:Requests per second: 295.90 [#/sec] (mean)
==

== CONNECTIONS=50
remote-results/caddy.txt:Requests per second: 594.78 [#/sec] (mean)
remote-results/compojure.txt:Requests per second: 604.64 [#/sec] (mean)
remote-results/custom-many-places.txt:(Killed) Total of 9444 requests completed
remote-results/custom-many.txt:Requests per second: 598.88 [#/sec] (mean)
remote-results/custom-places.txt:(Killed) Total of 13088 requests completed
remote-results/custom-single.txt:Requests per second: 591.44 [#/sec] (mean)
remote-results/flask.txt:Requests per second: 605.75 [#/sec] (mean)
remote-results/guile.txt:Requests per second: 612.28 [#/sec] (mean)
remote-results/plug.txt:Requests per second: 617.95 [#/sec] (mean)
remote-results/scgi.txt:(Killed) Total of 12020 requests completed
remote-results/sinatra.txt:Requests per second: 367.58 [#/sec] (mean)
remote-results/stateful.txt:Requests per second: 530.00 [#/sec] (mean)
remote-results/stateless.txt:Requests per second: 546.76 [#/sec] (mean)
==

== CONNECTIONS=100
remote-results/caddy.txt:Requests per second: 1016.63 [#/sec] (mean)
remote-results/compojure.txt:Requests per second: 1103.84 [#/sec] (mean)
remote-results/custom-many-places.txt:(Killed) Total of 9908 requests completed
remote-results/custom-many.txt:Requests per second: 1140.40 [#/sec] (mean)
remote-results/custom-places.txt:(Killed) Total of 13081 requests completed
remote-results/custom-single.txt:Requests per second: 1134.93 [#/sec] (mean)
remote-results/flask.txt:Requests per second: 1024.25 [#/sec] (mean)
remote-results/guile.txt:Requests per second: 1085.03 [#/sec] (mean)
remote-results/plug.txt:Requests per second: 1140.43 [#/sec] (mean)
remote-results/scgi.txt:(Killed) Total of 10969 requests completed
remote-results/sinatra.txt:Requests per second: 384.41 [#/sec] (mean)
remote-results/stateful.txt:Requests per second: 726.84 [#/sec] (mean)
remote-results/stateless.txt:Requests per second: 682.58 [#/sec] (mean)
==

== CONNECTIONS=200
remote-results/caddy.txt:Requests per second: 1093.88
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
dbohdan wrote on 09/07/2017 04:52 PM:
> Last, I ran the benchmark over the Internet with two machines about 1.89×10^-10 light years apart. The applications ran on a very humble VPS. Due to its humbleness I had to reduce the number of concurrent connections to 25.

The #/sec for each implementation are suspiciously similar. I wonder whether they're limited by something like an accounting limit imposed on the VPS (such as network-bytes-per-second or TCP-connections-open), or by some other host/network limit.

> "places", "many-places", and racket-scgi ran out of memory with as few as 10 concurrent connections (racket-scgi seemingly due to nginx),

I want to acknowledge this humble-VPS benchmarking being/approximating a real scenario. For example, small embedded/IoT devices communicating via HTTP, or students/hobbyists using "free tier" VPSs/instances.

Just to note for the email list: small devices tend to force us to think about resources earlier and more often than bigger devices do. For example, the difference between "I'm trying to fit the image in this little computer's flash / This little computer can barely boot before we start getting out-of-memory process kills" and "I've just been coding this Web app for three months, and haven't really thought about what size and number of Amazon EC2 instances we'll need for the non-CDN serving." GC is another thing you might feel earlier on a small device.

I think we could probably find a way to serve 25 simultaneous connections via Racket on a pretty small device (maybe even an OpenWrt small home WiFi router, "http://www.neilvandyke.org/racket-openwrt/"). As is often the case on small devices, it takes some upfront decisions of architecture and software, with time a high priority, and then often some tuning beyond that.
For purposes of this last humble-VPS benchmarking (if we can keep making more benchmarking work for you), you might get those initial numbers from places/many-places/racket-scgi by setting Racket's memory usage limit. That might force it to GC early and often, and give poorer numbers, but at least it's running (the first priority is to not exhaust system memory, get any processes OOM-killed, deadlock, etc.).

For the racket-scgi + nginx setup, if nginx can't quickly be tuned to not be a problem itself, there are HTTP servers targeting smaller devices, like what OpenWrt uses for its admin interface. But I'd be tempted to whip up a fast and light HTTP server in pure Racket, and to then tackle GC delays and process growth.

(That "tempted" is hypothetical. Personally, any work I did right now would likely be a holistic approach to a particular consulting client's/employer's particular needs. Hopefully, this would contribute back open-source Racket packages and knowledge. But the contributions would probably be of the form "here's one way to do X and Y, which works well for our needs, in context with A, B, and C requirements and other architectural decisions", which is usually not the same as "here's an overall near-optimal generalized solution to a familiar class of problem". Unless the client/employer needs are for the generalized solution, or they are altruistic on this.)
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
Wow! Thanks for all of this work. It is really interesting to see how different the performance is on the Internet workload!

On Thu, Sep 7, 2017 at 9:52 PM, dbohdan wrote:
> On Tuesday, September 5, 2017 at 12:41:04 PM UTC+3, Jay McCarthy wrote:
>> Is the benchmarking client core the same core as the server core?
>> Could that help explain why single threaded performance is best?
>
> The not-quite-yes-or-no answer is that they were limited to separate virtual
> cores inside a VirtualBox VM. When a VirtualBox VM has N virtual cores on a
> physical CPU with M cores, it can use roughly up to N/M of the CPU's
> resources. In that benchmark the applications, which had access to two
> virtual cores, had 50% of the four-core physical CPU to themselves, and the
> load generator had 25% with one virtual core.
>
> I've run three variants of the benchmark to see if running in a VM had a
> noticeable effect on single-threaded vs. multi-threaded performance and to
> address your question about whether "many" underperformed because of a
> virtual network.
>
> First, I got rid of the VM. Both the applications and the load generator ran
> in containers on the same machine, but not in a VM. This meant they were
> limited to different physical cores. The results were similar to those in a
> VM. The numbers were lower overall due to the slightly weaker hardware.
>
> ==
> results/caddy.txt:Requests per second: 2758.06 [#/sec] (mean)
> results/compojure.txt:Requests per second: 2670.11 [#/sec] (mean)
> results/custom-many-places.txt:Requests per second: 4326.27 [#/sec] (mean)
> results/custom-many.txt:Requests per second: 4655.04 [#/sec] (mean)
> results/custom-places.txt:Requests per second: 4584.75 [#/sec] (mean)
> results/custom-single.txt:Requests per second: 5191.93 [#/sec] (mean)
> results/flask.txt:Requests per second: .25 [#/sec] (mean)
> results/guile.txt:Requests per second: 1933.10 [#/sec] (mean)
> results/plug.txt:Requests per second: 3346.99 [#/sec] (mean)
> results/scgi.txt:Requests per second: 2092.03 [#/sec] (mean)
> results/sinatra.txt:Requests per second: 293.60 [#/sec] (mean)
> results/stateful.txt:Requests per second: 532.61 [#/sec] (mean)
> results/stateless.txt:Requests per second: 625.02 [#/sec] (mean)
> ==
>
> Second, I ran the benchmark over a gigabit local network. Yesterday I pushed
> a script for this (`remote-benchmark.exp`) to the repository. The
> applications ran on one machine (in a Docker container with access to two
> virtual cores). The load generator ran on another (my laptop).
>
> ==
> remote-results/caddy.txt-Requests per second: 3119.23 [#/sec] (mean)
> remote-results/compojure.txt-Requests per second: 4009.71 [#/sec] (mean)
> remote-results/custom-many-places.txt-Requests per second: 4409.48 [#/sec] (mean)
> remote-results/custom-many.txt-Requests per second: 5499.20 [#/sec] (mean)
> remote-results/custom-places.txt-Requests per second: 5072.63 [#/sec] (mean)
> remote-results/custom-single.txt-Requests per second: 6246.09 [#/sec] (mean)
> remote-results/flask.txt-Requests per second: 1106.43 [#/sec] (mean)
> remote-results/guile.txt-Requests per second: 2062.53 [#/sec] (mean)
> remote-results/plug.txt-Requests per second: 4034.74 [#/sec] (mean)
> remote-results/scgi.txt-Requests per second: 2046.91 [#/sec] (mean)
> remote-results/sinatra.txt-Requests per second: 288.52 [#/sec] (mean)
> remote-results/stateful.txt-Requests per second: 542.27 [#/sec] (mean)
> remote-results/stateless.txt-Requests per second: 614.18 [#/sec] (mean)
> ==
>
> In both cases the ordering is still "single" > "many" > "places" >
> "many-places". Though "many" and "places" are pretty close in the first case,
> "many" consistently comes out ahead if you retest.
>
> Last, I ran the benchmark over the Internet with two machines about
> 1.89×10^-10 light years apart. The applications ran on a very humble VPS. Due
> to its humbleness I had to reduce the number of concurrent connections to 25.
> "places", "many-places", and racket-scgi ran out of memory with as few as 10
> concurrent connections (racket-scgi seemingly due to nginx), so I decided to
> exclude them rather than reduce the number of connections further.
>
> ==
>> env CONNECTIONS=25 ./remote-benchmark.exp vps
> remote-results/caddy.txt:Requests per second: 231.37 [#/sec] (mean)
> remote-results/compojure.txt:Requests per second: 242.41 [#/sec] (mean)
> remote-results/custom-many.txt:Requests per second: 250.35 [#/sec] (mean)
> remote-results/custom-single.txt:Requests per second: 255.21 [#/sec] (mean)
> remote-results/flask.txt:Requests per second: 235.26 [#/sec] (mean)
> remote-results/guile.txt:Requests per second: 242.38 [#/sec] (mean)
> remote-results/plug.txt:Requests per second: 244.98 [#/sec] (mean)
> remote-results/sinatra.txt:Requests per second: 239.78 [#/sec] (mean)
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Thu, Sep 7, 2017 at 4:52 PM, dbohdan wrote:
> In both cases the ordering is still "single" > "many" > "places" >
> "many-places". Though "many" and "places" are pretty close in the first case,
> "many" consistently comes out ahead if you retest.

This is really interesting. I wonder how costly the inter-place communication is, relative to the cost of actually generating and sending the response.
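One rough way to put a number on that question (a micro-benchmark sketch of my own, not code from the repository): time round trips through an echo place and compare the per-message cost against the time it takes to generate a response.

```racket
#lang racket
(require racket/place syntax/location)

;; The echo worker lives in its own submodule so that the new place
;; loads only this submodule, not the enclosing file's top level.
(module echo racket/base
  (require racket/place)
  (provide start)
  (define (start ch)
    (let loop ()
      (place-channel-put ch (place-channel-get ch))
      (loop))))

(define (make-echo-place)
  (dynamic-place (quote-module-path echo) 'start))

(module+ main
  (define p (make-echo-place))
  ;; Warm up, then time 10000 put/get round trips; dividing the
  ;; reported real time by 10000 gives an approximate per-message cost.
  (place-channel-put p 'warmup)
  (place-channel-get p)
  (time
   (for ([i (in-range 10000)])
     (place-channel-put p i)
     (place-channel-get p))))
```

This only measures small immediate values; the messages in the benchmarked server carry TCP ports, whose transfer cost may differ.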
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Tuesday, September 5, 2017 at 12:41:04 PM UTC+3, Jay McCarthy wrote:
> Is the benchmarking client core the same core as the server core?
> Could that help explain why single threaded performance is best?

The not-quite-yes-or-no answer is that they were limited to separate virtual cores inside a VirtualBox VM. When a VirtualBox VM has N virtual cores on a physical CPU with M cores, it can use roughly up to N/M of the CPU's resources. In that benchmark the applications, which had access to two virtual cores, had 50% of the four-core physical CPU to themselves, and the load generator had 25% with one virtual core.

I've run three variants of the benchmark to see if running in a VM had a noticeable effect on single-threaded vs. multi-threaded performance and to address your question about whether "many" underperformed because of a virtual network.

First, I got rid of the VM. Both the applications and the load generator ran in containers on the same machine, but not in a VM. This meant they were limited to different physical cores. The results were similar to those in a VM. The numbers were lower overall due to the slightly weaker hardware.
==
results/caddy.txt:Requests per second: 2758.06 [#/sec] (mean)
results/compojure.txt:Requests per second: 2670.11 [#/sec] (mean)
results/custom-many-places.txt:Requests per second: 4326.27 [#/sec] (mean)
results/custom-many.txt:Requests per second: 4655.04 [#/sec] (mean)
results/custom-places.txt:Requests per second: 4584.75 [#/sec] (mean)
results/custom-single.txt:Requests per second: 5191.93 [#/sec] (mean)
results/flask.txt:Requests per second: .25 [#/sec] (mean)
results/guile.txt:Requests per second: 1933.10 [#/sec] (mean)
results/plug.txt:Requests per second: 3346.99 [#/sec] (mean)
results/scgi.txt:Requests per second: 2092.03 [#/sec] (mean)
results/sinatra.txt:Requests per second: 293.60 [#/sec] (mean)
results/stateful.txt:Requests per second: 532.61 [#/sec] (mean)
results/stateless.txt:Requests per second: 625.02 [#/sec] (mean)
==

Second, I ran the benchmark over a gigabit local network. Yesterday I pushed a script for this (`remote-benchmark.exp`) to the repository. The applications ran on one machine (in a Docker container with access to two virtual cores). The load generator ran on another (my laptop).
==
remote-results/caddy.txt-Requests per second: 3119.23 [#/sec] (mean)
remote-results/compojure.txt-Requests per second: 4009.71 [#/sec] (mean)
remote-results/custom-many-places.txt-Requests per second: 4409.48 [#/sec] (mean)
remote-results/custom-many.txt-Requests per second: 5499.20 [#/sec] (mean)
remote-results/custom-places.txt-Requests per second: 5072.63 [#/sec] (mean)
remote-results/custom-single.txt-Requests per second: 6246.09 [#/sec] (mean)
remote-results/flask.txt-Requests per second: 1106.43 [#/sec] (mean)
remote-results/guile.txt-Requests per second: 2062.53 [#/sec] (mean)
remote-results/plug.txt-Requests per second: 4034.74 [#/sec] (mean)
remote-results/scgi.txt-Requests per second: 2046.91 [#/sec] (mean)
remote-results/sinatra.txt-Requests per second: 288.52 [#/sec] (mean)
remote-results/stateful.txt-Requests per second: 542.27 [#/sec] (mean)
remote-results/stateless.txt-Requests per second: 614.18 [#/sec] (mean)
==

In both cases the ordering is still "single" > "many" > "places" > "many-places". Though "many" and "places" are pretty close in the first case, "many" consistently comes out ahead if you retest.

Last, I ran the benchmark over the Internet with two machines about 1.89×10^-10 light years apart. The applications ran on a very humble VPS. Due to its humbleness I had to reduce the number of concurrent connections to 25. "places", "many-places", and racket-scgi ran out of memory with as few as 10 concurrent connections (racket-scgi seemingly due to nginx), so I decided to exclude them rather than reduce the number of connections further.
==
> env CONNECTIONS=25 ./remote-benchmark.exp vps
remote-results/caddy.txt:Requests per second: 231.37 [#/sec] (mean)
remote-results/compojure.txt:Requests per second: 242.41 [#/sec] (mean)
remote-results/custom-many.txt:Requests per second: 250.35 [#/sec] (mean)
remote-results/custom-single.txt:Requests per second: 255.21 [#/sec] (mean)
remote-results/flask.txt:Requests per second: 235.26 [#/sec] (mean)
remote-results/guile.txt:Requests per second: 242.38 [#/sec] (mean)
remote-results/plug.txt:Requests per second: 244.98 [#/sec] (mean)
remote-results/sinatra.txt:Requests per second: 239.78 [#/sec] (mean)
remote-results/stateful.txt:Requests per second: 239.60 [#/sec] (mean)
remote-results/stateless.txt:Requests per second: 238.71 [#/sec] (mean)
==
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Tuesday, September 5, 2017 at 3:17:38 AM UTC-7, Piyush Katariya wrote:
> Wow. ~7K looks like a good number.
>
> Is it common practice to spawn a thread for each request? Is it that cheap
> from a resource point of view? Could a thread pool be of some help here?

Racket threads are not OS threads. They're "green threads" and are cooperatively scheduled by the Racket runtime. They're very cheap to create, even with a short life span.
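A quick toy example of my own (not from the benchmark) to see how cheap they are: create 100,000 short-lived green threads and wait for them all.

```racket
#lang racket

;; Toy illustration: 100,000 short-lived green threads, each bumping a
;; shared counter under a semaphore. Creation is fast even at this
;; scale, which is why a thread per request is idiomatic in Racket.
(define n 100000)
(define counter 0)
(define lock (make-semaphore 1))

(define threads
  (for/list ([i (in-range n)])
    (thread
     (lambda ()
       (call-with-semaphore lock
         (lambda () (set! counter (add1 counter))))))))

;; Wait for every thread to finish before reading the counter.
(for-each thread-wait threads)
(printf "ran ~a threads, counter = ~a\n" n counter)
```

A thread pool mostly helps in runtimes where thread creation is expensive (e.g. OS threads); with green threads this cheap, pooling buys little unless you want to bound concurrency.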
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
Wow. ~7K looks like a good number.

Is it common practice to spawn a thread for each request? Is it that cheap from a resource point of view? Could a thread pool be of some help here?
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
Is the benchmarking client core the same core as the server core? Could that help explain why single threaded performance is best?

On Tue, Sep 5, 2017 at 10:00 AM, dbohdan wrote:
> On Tuesday, September 5, 2017 at 11:41:46 AM UTC+3, dbohdan wrote:
>> I'll try this again with two fixed cores available to the application
>> container.
>
> results/custom-many-places.txt:Requests per second: 6517.83 [#/sec] (mean)
> results/custom-many.txt:Requests per second: 7949.04 [#/sec] (mean)
> results/custom-places.txt:Requests per second: 7521.15 [#/sec] (mean)
> results/custom-single.txt:Requests per second: 8675.64 [#/sec] (mean)

--
-=[ Jay McCarthy http://jeapostrophe.github.io ]=-
-=[ Associate Professor, PLT @ CS @ UMass Lowell ]=-
-=[ Moses 1:33: And worlds without number have I created; ]=-
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Tuesday, September 5, 2017 at 11:41:46 AM UTC+3, dbohdan wrote:
> I'll try this again with two fixed cores available to the application
> container.

results/custom-many-places.txt:Requests per second: 6517.83 [#/sec] (mean)
results/custom-many.txt:Requests per second: 7949.04 [#/sec] (mean)
results/custom-places.txt:Requests per second: 7521.15 [#/sec] (mean)
results/custom-single.txt:Requests per second: 8675.64 [#/sec] (mean)
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Tuesday, September 5, 2017 at 10:57:17 AM UTC+3, Jay McCarthy wrote:
> I've just tested on Linux and OS X and I don't see that behavior. I'm
> quite confused.

Yes, scratch what I said. The "many-places" benchmark only fails this way for me on a particular Linux VM, which just so happened to be the one I was testing it on. Maybe I got the VM in a bad state. If the problem is meaningfully related to the benchmarked application, I'll follow up on it. Meanwhile, here are some benchmark results for "many-places". The transferred data sizes suggest it worked correctly.

==
results/custom-many-places.txt:Requests per second: 4931.29 [#/sec] (mean)
results/custom-many.txt:Requests per second: 6449.73 [#/sec] (mean)
results/custom-places.txt:Requests per second: 7325.81 [#/sec] (mean)
results/custom-single.txt:Requests per second: 7793.91 [#/sec] (mean)
==

I'll try this again with two fixed cores available to the application container.
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Tue, Sep 5, 2017 at 8:07 AM, dbohdan wrote:
> ==
> results/custom-many.txt:Requests per second: 6720.43 [#/sec] (mean)
> results/custom-places.txt:Requests per second: 7095.99 [#/sec] (mean)
> results/custom-single.txt:Requests per second: 7609.11 [#/sec] (mean)
> ==

That is interesting too. The places version serves one request at a time on each place, so there's some parallelism but each place does its work serially.

> As for "many-places", I was mistaken about it running out of file
> descriptors. I accidentally tested "places" in its stead. As-is
> (https://gitlab.com/dbohdan/racket-vs-the-world/blob/97dd7858aecab9af2a66ed687d12ce45adb4899d/apps/racket-custom/lipsum.rkt),
> "many-places" does not send anything to incoming connections and never
> closes them.

I've just tested on Linux and OS X and I don't see that behavior. I'm quite confused.

--
-=[ Jay McCarthy http://jeapostrophe.github.io ]=-
-=[ Associate Professor, PLT @ CS @ UMass Lowell ]=-
-=[ Moses 1:33: And worlds without number have I created; ]=-
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Tuesday, September 5, 2017 at 8:50:27 AM UTC+3, Jon Zeppieri wrote:
> (tcp-abandon-port r)
> (tcp-abandon-port w)

You're right. This worked for "places". I've rerun "single" and "many" along with "places".

==
results/custom-many.txt:Requests per second: 6720.43 [#/sec] (mean)
results/custom-places.txt:Requests per second: 7095.99 [#/sec] (mean)
results/custom-single.txt:Requests per second: 7609.11 [#/sec] (mean)
==

As for "many-places", I was mistaken about it running out of file descriptors. I accidentally tested "places" in its stead. As-is (https://gitlab.com/dbohdan/racket-vs-the-world/blob/97dd7858aecab9af2a66ed687d12ce45adb4899d/apps/racket-custom/lipsum.rkt), "many-places" does not send anything to incoming connections and never closes them.

On Tuesday, September 5, 2017 at 9:01:14 AM UTC+3, Jay McCarthy wrote:
> Yes, that's good.

All right.

> It is really surprising to me that the many version doesn't perform
> better, because I assumed that there would be IO delays on one
> connection and you wouldn't want to stall others while waiting to
> read/write it. Presumably this is a bit of an artifact of the
> benchmarking happening on localhost?

I was wondering about the reason myself. To tease it out, I'll try a few variations on the benchmark later.
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Tue, Sep 5, 2017 at 6:38 AM, dbohdan wrote:
> On Monday, September 4, 2017 at 7:11:14 PM UTC+3, Jay McCarthy wrote:
>
> I would like to add you to the AUTHORS file
> (https://gitlab.com/dbohdan/racket-vs-the-world/blob/master/AUTHORS — please
> read). Would this attribution line be okay?
>
>> Jay McCarthy
>> https://jeapostrophe.github.io/

Yes, that's good.

> I've run the default benchmark with the new application, which I've dubbed
> "racket-custom". (Actually, I had to make a tweak to the benchmark to
> accommodate the number of requests it was fulfilling. It made ApacheBench
> overstep its memory quota and get killed.) When started with the "places" or
> the "many-places" command line argument on Linux, racket-custom quickly runs
> out of file descriptors. It opens one per request and apparently doesn't
> close them. The following results are for the other two modes.
>
> ==
>> grep 'Requests per second' results/*
> results/custom-single.txt:Requests per second: 8086.51 [#/sec] (mean)
> results/custom-many.txt:Requests per second: 7000.06 [#/sec] (mean)
> ==

It is really surprising to me that the many version doesn't perform better, because I assumed that there would be IO delays on one connection and you wouldn't want to stall others while waiting to read/write it. Presumably this is a bit of an artifact of the benchmarking happening on localhost?

These seem like pretty good results (x2 over the best before!) and I interpret them as telling us that the problem is not in Racket's IO system but in how the Web server adds overhead.

--
-=[ Jay McCarthy http://jeapostrophe.github.io ]=-
-=[ Associate Professor, PLT @ CS @ UMass Lowell ]=-
-=[ Moses 1:33: And worlds without number have I created; ]=-
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
I think so too (in both places versions).

On Tue, Sep 5, 2017 at 6:50 AM, Jon Zeppieri wrote:
> On Tue, Sep 5, 2017 at 1:38 AM, dbohdan wrote:
>>
>> I've run the default benchmark with the new application, which I've dubbed
>> "racket-custom". (Actually, I had to make a tweak to the benchmark to
>> accommodate the number of requests it was fulfilling. It made ApacheBench
>> overstep its memory quota and get killed.) When started with the "places" or
>> the "many-places" command line argument on Linux, racket-custom quickly runs
>> out of file descriptors. It opens one per request and apparently doesn't
>> close them.
>
> In this code:
>
>     (let loop ()
>       (define-values (r w) (tcp-accept l))
>       (place-channel-put jobs-ch-to (cons r w))
>       (loop))
>
> after sending the ports to the place and before looping, I think the
> ports need to be abandoned:
>
>     (tcp-abandon-port r)
>     (tcp-abandon-port w)
>
> - Jon

--
-=[ Jay McCarthy       http://jeapostrophe.github.io ]=-
-=[ Associate Professor PLT @ CS @ UMass Lowell ]=-
-=[ Moses 1:33: And worlds without number have I created; ]=-
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Tue, Sep 5, 2017 at 1:38 AM, dbohdan wrote:
>
> I've run the default benchmark with the new application, which I've dubbed
> "racket-custom". (Actually, I had to make a tweak to the benchmark to
> accommodate the number of requests it was fulfilling. It made ApacheBench
> overstep its memory quota and get killed.) When started with the "places" or
> the "many-places" command line argument on Linux, racket-custom quickly runs
> out of file descriptors. It opens one per request and apparently doesn't
> close them.

In this code:

    (let loop ()
      (define-values (r w) (tcp-accept l))
      (place-channel-put jobs-ch-to (cons r w))
      (loop))

after sending the ports to the place and before looping, I think the ports need to be abandoned:

    (tcp-abandon-port r)
    (tcp-abandon-port w)

- Jon
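For clarity, here is a sketch of what the parent's dispatch loop looks like with that fix folded in, assuming the same `l` listener and `jobs-ch-to` channel from Jay's code (untested):

    ;; Accept loop with the ports abandoned after hand-off.
    ;; tcp-abandon-port releases this side's reference to the port
    ;; without sending a TCP close, so the receiving place can keep
    ;; using the connection while the parent stops leaking a file
    ;; descriptor per request.
    (let loop ()
      (define-values (r w) (tcp-accept l))
      (place-channel-put jobs-ch-to (cons r w))
      (tcp-abandon-port r)
      (tcp-abandon-port w)
      (loop))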
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Monday, September 4, 2017 at 7:11:14 PM UTC+3, Jay McCarthy wrote:
> Thank you for working on this Danyil.

You're welcome!

> Would you please add this file to your tests (and each of its three
> ways of running?)

Added, and updated to the "many-places" version.

I would like to add you to the AUTHORS file (https://gitlab.com/dbohdan/racket-vs-the-world/blob/master/AUTHORS — please read). Would this attribution line be okay?

> Jay McCarthy
> https://jeapostrophe.github.io/

I've run the default benchmark with the new application, which I've dubbed "racket-custom". (Actually, I had to make a tweak to the benchmark to accommodate the number of requests it was fulfilling. It made ApacheBench overstep its memory quota and get killed.) When started with the "places" or the "many-places" command line argument on Linux, racket-custom quickly runs out of file descriptors. It opens one per request and apparently doesn't close them. The following results are for the other two modes.

==
> grep 'Requests per second' results/*
results/caddy.txt:Requests per second:         3724.58 [#/sec] (mean)
results/compojure.txt:Requests per second:     3342.73 [#/sec] (mean)
results/custom-single.txt:Requests per second: 8086.51 [#/sec] (mean)
results/custom-many.txt:Requests per second:   7000.06 [#/sec] (mean)
results/flask.txt:Requests per second:         1113.81 [#/sec] (mean)
results/guile.txt:Requests per second:         2025.52 [#/sec] (mean)
results/plug.txt:Requests per second:          4367.07 [#/sec] (mean)
results/scgi.txt:Requests per second:          2243.83 [#/sec] (mean)
results/sinatra.txt:Requests per second:        324.91 [#/sec] (mean)
results/stateful.txt:Requests per second:       538.47 [#/sec] (mean)
results/stateless.txt:Requests per second:      657.18 [#/sec] (mean)
==

Long-form results with latency data are attached.
> grep -A 29 'Concurrency Level' results/*
results/caddy.txt:Concurrency Level:      100
results/caddy.txt-Time taken for tests:   180.000 seconds
results/caddy.txt-Complete requests:      670425
results/caddy.txt-Failed requests:        0
results/caddy.txt-Total transferred:      2900258550 bytes
results/caddy.txt-HTML transferred:       2752094625 bytes
results/caddy.txt-Requests per second:    3724.58 [#/sec] (mean)
results/caddy.txt-Time per request:       26.849 [ms] (mean)
results/caddy.txt-Time per request:       0.268 [ms] (mean, across all concurrent requests)
results/caddy.txt-Transfer rate:          15734.91 [Kbytes/sec] received
results/caddy.txt-
results/caddy.txt-Connection Times (ms)
results/caddy.txt-              min  mean[+/-sd] median   max
results/caddy.txt-Connect:        0    0   0.2      0      16
results/caddy.txt-Processing:     0   27   4.0     26      96
results/caddy.txt-Waiting:        0   26   3.7     26      81
results/caddy.txt-Total:          0   27   4.0     27      96
results/caddy.txt-
results/caddy.txt-Percentage of the requests served within a certain time (ms)
results/caddy.txt-  50%     27
results/caddy.txt-  66%     27
results/caddy.txt-  75%     28
results/caddy.txt-  80%     29
results/caddy.txt-  90%     30
results/caddy.txt-  95%     32
results/caddy.txt-  98%     34
results/caddy.txt-  99%     37
results/caddy.txt- 100%     96 (longest request)
--
results/compojure.txt:Concurrency Level:      100
results/compojure.txt-Time taken for tests:   180.000 seconds
results/compojure.txt-Complete requests:      601692
results/compojure.txt-Failed requests:        0
results/compojure.txt-Total transferred:      2551174080 bytes
results/compojure.txt-HTML transferred:       2469945660 bytes
results/compojure.txt-Requests per second:    3342.73 [#/sec] (mean)
results/compojure.txt-Time per request:       29.916 [ms] (mean)
results/compojure.txt-Time per request:       0.299 [ms] (mean, across all concurrent requests)
results/compojure.txt-Transfer rate:          13841.00 [Kbytes/sec] received
results/compojure.txt-
results/compojure.txt-Connection Times (ms)
results/compojure.txt-              min  mean[+/-sd] median   max
results/compojure.txt-Connect:        0    1  35.6      0    1037
results/compojure.txt-Processing:     1   28  14.8     27     312
results/compojure.txt-Waiting:        1   28  14.7     27     278
results/compojure.txt-Total:          1   30  38.6     27    1196
results/compojure.txt-
results/compojure.txt-Percentage of the requests served within a certain time (ms)
results/compojure.txt-  50%     27
results/compojure.txt-  66%     32
results/compojure.txt-  75%     36
results/compojure.txt-  80%     39
results/compojure.txt-  90%     45
results/compojure.txt-  95%     52
results/compojure.txt-  98%     64
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
I thought of another way, so here's a fourth version:

#lang racket/base
(require racket/tcp
         racket/match
         racket/place)

(define message #"Lorem ipsum...") ;; XXX fill this in
(define len (bytes-length message))

(define (serve! r w)
  (let read-loop ()
    (define b (read-bytes-line r 'any))
    (unless (or (eof-object? b) (bytes=? b #""))
      (read-loop)))
  (close-input-port r)
  (write-bytes #"HTTP/1.1 200 OK\r\n" w)
  (write-bytes #"Connection: close\r\n" w)
  (write-string (format "Content-Length: ~a\r\n" len) w)
  (write-bytes #"\r\n" w)
  (write-bytes message w)
  (close-output-port w))

(define port-no 5000)
(define (setup! k)
  (define l (tcp-listen port-no 500 #t "0.0.0.0"))
  (k l)
  (tcp-close l))

(define (single-at-a-time l)
  (let loop ()
    (define-values (r w) (tcp-accept l))
    (serve! r w)
    (loop)))

(define (many-at-a-time l)
  (let loop ()
    (define-values (r w) (tcp-accept l))
    (thread (λ () (serve! r w)))
    (loop)))

(define (k-places l)
  (define-values (jobs-ch-to jobs-ch-from) (place-channel))
  (define ps
    (for/list ([i (in-range (processor-count))])
      (place/context
       local-ch
       (let loop ()
         (define r*w (place-channel-get jobs-ch-from))
         (serve! (car r*w) (cdr r*w))
         (loop)))))
  (let loop ()
    (define-values (r w) (tcp-accept l))
    (place-channel-put jobs-ch-to (cons r w))
    (loop)))

(define (many-places l)
  (define ps
    (for/list ([i (in-range (processor-count))])
      (place/context
       jobs-ch-from
       (let loop ()
         (define r*w (place-channel-get jobs-ch-from))
         (thread (λ () (serve! (car r*w) (cdr r*w))))
         (loop)))))
  (let loop ()
    (for ([send-to-p-ch (in-list ps)])
      (define-values (r w) (tcp-accept l))
      (place-channel-put send-to-p-ch (cons r w)))
    (loop)))

(module+ main
  (setup!
   (match (current-command-line-arguments)
     [(vector "single") single-at-a-time]
     [(vector "many") many-at-a-time]
     [(vector "places") k-places]
     [(vector "many-places") many-places])))

On Mon, Sep 4, 2017 at 5:11 PM, Jay McCarthy wrote:
> Thank you for working on this Danyil. I think it is fair to test what
> the defaults give you.
>
> Would you please add this file to your tests (and each of its three
> ways of running?) It would be interesting to compare the performance
> of Racket versus the particular Web server library. (The Web server
> sets up a lot of safety state per connection to ensure that each
> individual connection doesn't run out of memory or crash anything. I
> am curious what the cumulative effect of all those features and
> protections is.)
>
> Jay

--
-=[ Jay McCarthy       http://jeapostrophe.github.io ]=-
-=[ Associate Professor PLT @ CS @ UMass Lowell ]=-
-=[ Moses 1:33: And worlds without number have I created; ]=-
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
Thank you for working on this Danyil. I think it is fair to test what the defaults give you.

Would you please add this file to your tests (and each of its three ways of running)? It would be interesting to compare the performance of Racket versus the particular Web server library. (The Web server sets up a lot of safety state per connection to ensure that each individual connection doesn't run out of memory or crash anything. I am curious what the cumulative effect of all those features and protections is.)

Jay

#lang racket/base
(require racket/tcp
         racket/match)

(define message #"Lorem ipsum...") ;; XXX fill this in
(define len (bytes-length message))

(define (serve! r w)
  (let read-loop ()
    (define b (read-bytes-line r 'any))
    (unless (or (eof-object? b) (bytes=? b #""))
      (read-loop)))
  (close-input-port r)
  (write-bytes #"HTTP/1.1 200 OK\r\n" w)
  (write-bytes #"Connection: close\r\n" w)
  (write-string (format "Content-Length: ~a\r\n" len) w)
  (write-bytes #"\r\n" w)
  (write-bytes message w)
  (close-output-port w))

(define port-no 5000)
(define (setup! k)
  (define l (tcp-listen port-no 500 #t "0.0.0.0"))
  (k l)
  (tcp-close l))

(define (single-at-a-time l)
  (let loop ()
    (define-values (r w) (tcp-accept l))
    (serve! r w)
    (loop)))

(define (many-at-a-time l)
  (let loop ()
    (define-values (r w) (tcp-accept l))
    (thread (λ () (serve! r w)))
    (loop)))

(define (k-places l)
  (local-require racket/place)
  (define-values (jobs-ch-to jobs-ch-from) (place-channel))
  (define ps
    (for/list ([i (in-range (processor-count))])
      (place/context
       local-ch
       (let loop ()
         (define r*w (place-channel-get jobs-ch-from))
         (serve! (car r*w) (cdr r*w))
         (loop)))))
  (let loop ()
    (define-values (r w) (tcp-accept l))
    (place-channel-put jobs-ch-to (cons r w))
    (loop)))

(module+ main
  (setup!
   (match (current-command-line-arguments)
     [(vector "single") single-at-a-time]
     [(vector "many") many-at-a-time]
     [(vector "places") k-places])))

--
-=[ Jay McCarthy       http://jeapostrophe.github.io ]=-
-=[ Associate Professor PLT @ CS @ UMass Lowell ]=-
-=[ Moses 1:33: And worlds without number have I created; ]=-
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On Saturday, September 2, 2017 at 8:46:54 PM UTC+3, Piyush Katariya wrote:
> Does Racket app make use of all CPU cores by having multiple processes ?

Thanks for asking this question. It prompted me to revise how the benchmark is run.

The short answer is that the servlet application uses a single core. The SCGI application is the same way but benefits from nginx's built-in support for multicore through worker processes. I've made the servlet application use futures according to Jay McCarthy's post at https://lists.racket-lang.org/users/archive/2014-July/063419.html, but found that, as he predicted, it did not improve the performance (in fact, it reduced it). I don't know offhand how to implement places-based workers for servlets. I may investigate it later (I'm interested in message-passing parallelism), but my primary intention with this project is to measure the performance a developer gets out of existing, reusable, hopefully already debugged libraries and frameworks. A little custom code and configuration is fine, but a custom work scheduler seems to me to go beyond that. Does a library exist for running servlets in places?

I've also experimented with having nginx load balance between two Racket SCGI instances. The result was somewhat better throughput (~2650 req/s instead of ~2300 req/s) and identical latency when the application had two cores to work with, and worse throughput (~1800 req/s) and latency with only one.

As far as fairness goes, I don't think either a strictly single-core or a use-them-if-you-can multicore benchmark is clearly unfair. Both types have value. After some consideration, I've decided to commit to single-core (sort of — read on) as the default for this benchmark. My first solution was to limit the VM in which I ran the benchmarks to a single core, but that led to ApacheBench and the application competing for CPU time. This would take the benchmark further away from the real world, and it is generally not recommended to have the benchmarked application and the load generator share a CPU. I've tried a few solutions, and the best I have found in terms of how the resources are allocated is to bind each of the two containers (the application and ApacheBench) to a separate CPU core. That way the applications get only one core but don't have to fight for it with ApacheBench. I've pushed this update to the repository.

Here are some numbers for the three configurations: one shared core, two shared cores, and one core per container.

-- 1 shared core
results/caddy.txt     -Requests per second: 2312.93 [#/sec] (mean)
results/compojure.txt -Requests per second: 1677.89 [#/sec] (mean)
results/flask.txt     -Requests per second:  977.33 [#/sec] (mean)
results/guile.txt     -Requests per second: 1508.77 [#/sec] (mean)
results/plug.txt      -Requests per second: 2335.21 [#/sec] (mean)
results/scgi.txt      -Requests per second: 2163.00 [#/sec] (mean)
results/sinatra.txt   -Requests per second:  317.75 [#/sec] (mean)
results/stateful.txt  -Requests per second:  494.55 [#/sec] (mean)
results/stateless.txt -Requests per second:  584.34 [#/sec] (mean)

-- 2 shared cores
results/caddy.txt     -Requests per second: 4358.68 [#/sec] (mean)
results/compojure.txt -Requests per second: 4730.50 [#/sec] (mean)
results/flask.txt     -Requests per second: 1140.01 [#/sec] (mean)
results/guile.txt     -Requests per second: 2092.78 [#/sec] (mean)
results/plug.txt      -Requests per second: 5235.78 [#/sec] (mean)
results/scgi.txt      -Requests per second: 3074.15 [#/sec] (mean)
results/sinatra.txt   -Requests per second:  329.35 [#/sec] (mean)
results/stateful.txt  -Requests per second:  604.30 [#/sec] (mean)
results/stateless.txt -Requests per second:  687.77 [#/sec] (mean)

-- 2 fixed cores (one for "benchmarked", one for "ab")
results/caddy.txt     -Requests per second: 3963.03 [#/sec] (mean)
results/compojure.txt -Requests per second: 2513.05 [#/sec] (mean)
results/flask.txt     -Requests per second: 1207.77 [#/sec] (mean)
results/guile.txt     -Requests per second: 2133.48 [#/sec] (mean)
results/plug.txt      -Requests per second: 4322.55 [#/sec] (mean)
results/scgi.txt      -Requests per second: 2406.02 [#/sec] (mean)
results/sinatra.txt   -Requests per second:  347.89 [#/sec] (mean)
results/stateful.txt  -Requests per second:  573.48 [#/sec] (mean)
results/stateless.txt -Requests per second:  658.67 [#/sec] (mean)
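For reference, the nginx side of that load-balancing experiment can be expressed with an upstream block along these lines. This is only a sketch: the ports, addresses, and listen directive are assumptions for illustration, not the benchmark's actual configuration.

    # Hypothetical config: round-robin between two Racket SCGI instances.
    upstream racket_scgi {
        server 127.0.0.1:4000;
        server 127.0.0.1:4001;
    }

    server {
        listen 8080;
        location / {
            include scgi_params;      # pass the standard SCGI variables
            scgi_pass racket_scgi;    # alternate between the backends
        }
    }

With only one core available, the proxying and the extra worker processes add overhead without adding parallelism, which is consistent with the worse single-core numbers above.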
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
Thanks for the clarification.

On 03-Sep-2017 12:31 AM, "George Neuner" wrote:
> On 9/2/2017 1:46 PM, Piyush Katariya wrote:
>> Does Racket app make use of all CPU cores by having multiple processes ?
>
> If it is written to use "places", which are parallel instances of the
> Racket VM that run on separate kernel threads.
> https://docs.racket-lang.org/guide/parallelism.html?q=place#%28tech._place%29
>
> What Racket calls "threads" are "green" [user-space] threads multiplexed on a
> single kernel thread.
> https://en.wikipedia.org/wiki/Green_threads
>
> George
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
On 9/2/2017 1:46 PM, Piyush Katariya wrote:
> Does Racket app make use of all CPU cores by having multiple processes ?

If it is written to use "places", which are parallel instances of the Racket VM that run on separate kernel threads:
https://docs.racket-lang.org/guide/parallelism.html?q=place#%28tech._place%29

What Racket calls "threads" are "green" [user-space] threads multiplexed on a single kernel thread:
https://en.wikipedia.org/wiki/Green_threads

George
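To make the distinction concrete, here is a toy sketch (not from the benchmark) in which two places compute sums on separate OS threads, so they can occupy two cores at once; ordinary Racket threads doing the same work would multiplex on a single core:

    #lang racket
    ;; Each place is a separate instance of the Racket VM on its own
    ;; OS thread; the only way to communicate with it is through its
    ;; place channel.
    (define (spawn-summer)
      (place ch
        (define n (place-channel-get ch))
        (place-channel-put ch (for/sum ([i (in-range n)]) i))))

    (module+ main
      (define ps (for/list ([_ (in-range 2)]) (spawn-summer)))
      (for ([p (in-list ps)]) (place-channel-put p 10000000))
      (for ([p (in-list ps)]) (displayln (place-channel-get p))))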
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
Then it might not be a fair benchmark in comparison to other platforms, is it?
Re: [racket-users] Re: Racket Web servlet performance benchmarked and compared
The Racket web server does not make use of multiple CPU cores, but with stateless continuations you can run multiple instances behind a reverse proxy. See https://groups.google.com/d/topic/racket-users/TC4JJnZo1U8/discussion ("it is exactly node.js without callbacks").

-Philip

On Sat, Sep 2, 2017 at 12:46 PM, Piyush Katariya wrote:
> Just curious ...
>
> Does Racket app make use of all CPU cores by having multiple processes ?
>
> In a Go app, there isn't any need to, because the Go runtime uses all
> available CPUs by default. The same goes for the JVM and the Erlang VM.
[racket-users] Re: Racket Web servlet performance benchmarked and compared
Just curious ...

Does a Racket app make use of all CPU cores by having multiple processes?

In a Go app, there isn't any need to, because the Go runtime uses all available CPUs by default. The same goes for the JVM and the Erlang VM.