On 22/09/2020 at 08:26, Reinhard Vicinus wrote:
Reproduction is pretty simple:

use the example configuration from haproxy/contrib/spoa_server
start up a haproxy version with a default nbthread value greater than 1, like this: "haproxy -f spoa-server.conf -d -Ws"
start the example spoa script: "./spoa -f ps_python.py -d"
run some curl requests: "curl localhost:10001/"
the variable sess.iprep.ip_score is only set on the first and every n-th request
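
For context, the SPOE filter behind sess.iprep.ip_score is declared in a config of roughly this shape. This is a sketch from memory of the contrib example, not a verbatim copy; section and message names may differ from the actual spoa-server.conf:

```
[iprep]
spoe-agent iprep-agent
    messages check-client-ip
    option var-prefix iprep
    timeout hello      100ms
    timeout idle       30s
    timeout processing 15ms
    use-backend iprep-backend

spoe-message check-client-ip
    args ip=src
    event on-frontend-tcp-request
```

With "option var-prefix iprep", a score set by the agent becomes readable as sess.iprep.ip_score in the HAProxy configuration.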


It is an issue with the spoa server. By default, 5 workers are started, except in debug mode, where there is only one. And I discovered that the spoa server has a design flaw: a worker can only handle one connection at a time. I don't know whether this is expected or not, but it is a problem if you have more threads on the HAProxy side than workers on the spoa server, because in HAProxy the SPOE applets are sticky to threads. This means you will have at least one applet per thread, each one owning a connection to the spoa server. Thus, with only one worker, only one applet on one thread will be able to send messages to the spoa server. All the others will be blocked on connection establishment, leading to a timeout.
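
As a toy model of that flaw: each worker accepts a single connection and then stays on it, so with more sticky applets (one per HAProxy thread) than workers, the surplus connections are never served. The names and numbers below are illustrative, not the real spoa_server internals:

```python
import queue
import threading
import time

WORKERS = 1          # spoa server in debug mode: a single worker
HAPROXY_THREADS = 4  # default nbthread > 1, one sticky applet each

pending = queue.Queue()  # connections waiting to be accepted
served = []

def worker():
    # A worker accepts ONE connection, then handles messages on it
    # forever and never comes back to accept the others.
    conn = pending.get()
    served.append(conn)

for thread_id in range(HAPROXY_THREADS):
    pending.put(f"applet-on-thread-{thread_id}")

for _ in range(WORKERS):
    threading.Thread(target=worker, daemon=True).start()

time.sleep(0.2)
print(f"served connections: {served}")
print(f"still blocked: {pending.qsize()}")  # these hit the timeout
```

Running this shows one served connection and three left in the queue, which is exactly the "only the first and every n-th request" symptom seen above.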

Worse, the spoa server supports neither pipelining nor async mode, so a worker can only process one message at a time, synchronously. On the HAProxy side, this means an applet is busy for the whole duration of the message processing, so new applets will be created to process other messages. And even in pipelining/async mode, more applets may be created to handle the load. Thus, it is really hard to use it in a production environment.
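
The throughput cost of that synchronous design is easy to sketch: without pipelining, N messages on one connection take at least N times the per-message processing time, which is why HAProxy spawns extra applets to keep up. The handling cost below is a made-up placeholder:

```python
import time

PROCESSING_TIME = 0.05  # hypothetical cost of handling one message
MESSAGES = 4

start = time.monotonic()
for _ in range(MESSAGES):
    # Stand-in for the worker handling one message synchronously;
    # the next message cannot be sent until this one is answered.
    time.sleep(PROCESSING_TIME)
elapsed = time.monotonic() - start
print(f"{MESSAGES} messages handled serially in ~{elapsed:.2f}s")
```

With pipelining, the messages could be in flight concurrently and the total would approach a single processing time instead of the sum.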

But to solve your issue, don't start the spoa server in debug mode, and set the right number of workers (>= nbthread) with the -n argument. This should work under a very light load.
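
Concretely, the commands would look something like this. The worker count is an assumption (pick it to match your actual nbthread), and paths are as in the reproduction above:

```
# Start the spoa server normally (no -d), with at least as many
# workers as HAProxy threads -- assuming nbthread=4 here:
./spoa -f ps_python.py -n 4

# Then start HAProxy as before:
haproxy -f spoa-server.conf -d -Ws
```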

--
Christopher Faulet
