George Neuner writes:

> What I did say is that Python's threads are core limited - and *that*
> is true.   As a technical matter, Python *may* in fact start threads
> on different cores, but the continual need to take the GIL quickly
> forces every running thread in the process onto the same core.

I was pointing out that the GIL is irrelevant in this case.  All the
Python implementations in this benchmark use either green threads by
monkeypatching the standard threading and IO modules (gevent, meinheld)
or coroutines (asyncio) for concurrency, and they fork subprocesses for
parallelism across cores.
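
The single-process model can be sketched in plain asyncio (a minimal
illustration of the shape, not the actual benchmark code): one event
loop multiplexes many in-flight requests cooperatively on one OS
thread, so the GIL never becomes a point of contention within a worker.

```python
# Sketch of the concurrency model (assumed shape, not the benchmark
# code): a single event loop interleaves many requests on one thread.
import asyncio

async def handle(i: int) -> str:
    await asyncio.sleep(0)  # stand-in for awaiting socket I/O
    return f"response {i}"

async def main() -> list:
    # 100 concurrent "requests", all served cooperatively on one thread
    return await asyncio.gather(*(handle(i) for i in range(100)))

results = asyncio.run(main())
```

Multi-core deployments then run one such worker per core (e.g. via a
process manager), which is why the GIL doesn't serialize them.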

> That one actually is expected:  Racket's JSON (de)serializer is
> relatively slow.

It's the same as the plaintext test, except JSON is written to the
client instead of plain text:

> What wasn't expected was Sam's results from the "plain text" test
> which also showed Racket much slower than Python.  That does hint at a
> lot of overhead in the Racket framework.

Here's single-core Python vs Racket in the plaintext benchmark on my
machine:

It is surprising that Racket does worse on this benchmark than it does
on the JSON one, despite the fact that `response/json' uses
`response/output' under the hood.

I see these errors from Racket when I run the plaintext benchmark, but
they don't occur in any of the others:

    racket: tcp-addresses: could not get peer address
    racket:   system error: Transport endpoint is not connected; errno=107
    racket:   context...:
    racket:    .../more-scheme.rkt:261:28

I'll try to figure out what's causing these.

> To my knowledge, continuations will not be a factor unless either 1)
> the application is written in the `#lang web-server' language (which converts
> everything to CPS), or 2) the code invokes one of the send/suspend/*
> functions.

Whether you use the web interaction functions or not, servlets have to
do some bookkeeping (create new "instances", insert continuation
prompts) to support continuations:


Bypassing all of this is what I considered cheating, because most people
probably won't do that.  At the same time, though, I haven't measured
what the overhead of all this bookkeeping is.  It could be minimal.

> My understanding is that the port passed to  response/output  is the
> actual socket ... so you can front-end it and write directly.  But
> that might be "cheating" under your definition.

That's what I did in the benchmark:

Only dispatchers get direct access to the socket's output port.
If you use `dispatch/servlet', it takes care of taking your
`response' value and calling `output-response' on it.  Unless the server
knows the connection should be closed or the request was a HEAD
request, it outputs the response by chunking it:

You received this message because you are subscribed to the Google Groups 
"Racket Users" group.