On 12. 12. 25 20:49, Branko Čibej wrote:
On 12. 12. 25 20:02, Daniel Sahlberg wrote:
Den tors 11 dec. 2025 kl 19:43 skrev Branko Čibej <[email protected]>:
On 11. 12. 25 19:59, Daniel Sahlberg wrote:
* Issue SERF-195, which is a substantial rewrite of the request queues.
The code is in a separate branch, SERF-195. The code looks good to me
but I haven't analyzed it in detail. It should be possible to merge it
to trunk.
The patch works, in the sense that it causes no regressions, but I
haven't been able to reproduce the reported issue. It also makes the
code slightly more readable, but I couldn't find the mentioned race
condition during review; it looks like just a refactoring of the
original code.

I think we need another pair of eyes to do a careful review of the
branch changes.

I'll try to reproduce the initial issue and see if I can figure out what
goes wrong. Do you have any ideas where I should start? I assume a
multithreaded client (with multiple connections) would be needed?

Multi-threading on the client has no effect here. Request queues are per-context and contexts require single-threaded access.

The original issue submission just adds a sleep() in serf__setup_request_basic_auth(), so I guess you don't need a multi-threaded client, you just need a server that always requires authentication. That's why, e.g., linking Subversion against that and reading from the Apache repositories won't work. You could try to set up a local multi-threaded HTTP server that requires basic auth and serves some data -- easy to do in Python -- then write a Serf-based client that sends requests in a loop. serf_get would be a good starting point.

I think the important points here are:

 * all requests to the server must require authentication;
 * the server must be able to process requests in parallel and out of
   order: this is the only way I can think of where processed requests
   could be removed from the middle of the queue, not the beginning or
   the end;
 * the server must be "slow" enough that the client request queue fills
   up, so it needs both an upper limit to the number of parallel
   requests and a fake latency for the responses.
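For illustration, here is a minimal sketch of a server meeting those three points, using only the Python standard library. The credentials, port, latency, and parallelism cap are all made up for the example; this is not the actual authserver.py from r1930478.

```python
import base64
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CREDENTIALS = base64.b64encode(b"user:pass").decode()  # illustrative
LATENCY = 0.2        # fake per-request latency, seconds
MAX_PARALLEL = 10    # upper limit on requests processed at once
_slots = threading.Semaphore(MAX_PARALLEL)

class AuthHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep connections open

    def _handle(self):
        if self.headers.get("Authorization", "") != "Basic " + CREDENTIALS:
            # Every request must carry credentials, or it gets a 401.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="test"')
            self.send_header("Content-Length", "0")
            self.end_headers()
            return
        with _slots:              # cap the number of parallel responses
            time.sleep(LATENCY)   # keep the server "slow"
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        if self.command != "HEAD":
            self.wfile.write(body)

    do_GET = do_HEAD = _handle

    def log_message(self, *args):
        pass  # silence per-request logging

if __name__ == "__main__":
    # Each connection is served on its own thread, so responses on
    # different connections can complete out of order.
    ThreadingHTTPServer(("127.0.0.1", 8087), AuthHandler).serve_forever()
```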


I'll give the server script a go.

See r1930478.

Try something like this:

   $ authserver.py 2>/dev/null &
   $ serf_get -n 400 -c 10 -x 20 -U user -P pass -m HEAD http://127.0.0.1:8087/


However, this still doesn't reproduce the reported issue. One possible reason is that serf_get does *not* pipeline the requests on each connection, but only sends the next request when the previous response has arrived. I haven't been able to track this down yet; I don't know whether it's a problem with serf_get or with authserver.py.
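For what it's worth, whether the server side copes with pipelining at all can be checked independently of serf_get with a few lines of raw-socket Python. The host, port, and credentials below just mirror the command line above and are purely illustrative.

```python
import base64
import socket

def pipeline_heads(host="127.0.0.1", port=8087, count=3):
    """Send `count` HEAD requests back to back on one connection,
    then read the responses; return how many 200s came back."""
    auth = base64.b64encode(b"user:pass").decode()
    request = ("HEAD / HTTP/1.1\r\n"
               "Host: %s\r\n"
               "Authorization: Basic %s\r\n\r\n" % (host, auth)).encode()
    with socket.create_connection((host, port), timeout=10) as s:
        # All requests go out before any response is read -- i.e.,
        # pipelined, which is what serf is expected to do but
        # serf_get apparently doesn't.
        s.sendall(request * count)
        data = b""
        while data.count(b"HTTP/1.") < count:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data.count(b"HTTP/1.1 200")
```

If this returns `count` against the test server, the server handles pipelined requests fine and the missing pipelining would be on the client side.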


-- Brane
