This makes proxying CURVE quite useless, as I can instead increase the
number of I/O threads.
Besides, in CURVE, I have introduced some delays between the different
steps of the handshake (part of the PARANO specifications). So here I
have a problem, since I cannot serve hundreds of simultaneous connections
without introducing huge latencies. Say I have one I/O thread, 100
simultaneous clients, and a 100 ms handshake: the last client sees a 10
s latency, since the delays are queued. So here I should assign, say,
20 or 50 threads to zmq I/O and only a few threads to the workers, as I
can use several sockets per worker.
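The queuing arithmetic above can be written out as a back-of-the-envelope model (pure arithmetic, no ZeroMQ calls; the numbers are the ones from my example):

```python
# Model: handshake delays are serialized on the I/O threads, so the last
# connecting client waits behind all the handshakes queued ahead of it.
handshake_ms = 100   # artificial delay per handshake
clients = 100        # simultaneous connecting clients
io_threads = 1

worst_case_ms = clients * handshake_ms // io_threads
print(worst_case_ms)   # 10000 ms = 10 s for the last client

# To bound the worst case at, say, 500 ms, the I/O thread count needed is:
target_ms = 500
needed_io_threads = clients * handshake_ms // target_ms
print(needed_io_threads)   # 20
```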
Or I can still proxy CURVE and use a different context in each worker, so
that each gets its own I/O thread(s). Is this correct? This
solution would not reduce the number of threads: 3 threads per worker (2
of them for I/O), 10 sockets per worker (max latency is 500 ms), 10 workers,
plus 2 for the proxy = 32 threads.
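For reference, here is the thread and latency accounting for that layout (a sketch; the worker, socket, and delay figures are the ones assumed above):

```python
workers = 10
io_threads_per_worker = 2    # each worker context gets its own 2 I/O threads
app_threads_per_worker = 1
proxy_threads = 2
sockets_per_worker = 10
handshake_ms = 100

total_threads = workers * (app_threads_per_worker + io_threads_per_worker) + proxy_threads
print(total_threads)   # 32

# Worst-case handshake latency inside one worker: sockets queued per I/O thread.
worst_case_ms = sockets_per_worker // io_threads_per_worker * handshake_ms
print(worst_case_ms)   # 500 ms
```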
Another idea which uses CURVE proxying is to manage the delays I have
introduced not in CURVE itself, but in the proxy. Then the delays are
not seen by the I/O threads, and I can get the whole application running
with a very minimal number of threads: one for the I/O, one for the
proxy (not blocking on the delays), and say 6 for the workers (say 100
sockets each).
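The same accounting for this minimal layout (a sketch with the counts assumed above):

```python
workers = 6
sockets_per_worker = 100

# One I/O thread + one proxy thread (delays handled there, non-blocking)
# + the worker threads.
total_threads = 1 + 1 + workers
total_clients = workers * sockets_per_worker
print(total_threads, total_clients)   # 8 threads serving 600 clients
```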
Is there some obvious strategy in terms of architecture?
On 14/02/2014 10:31, Pieter Hintjens wrote:
When you're doing encryption you will be hitting CPU limits. The work
will happen in the background I/O thread, not your application thread.
So start as many I/O threads as you have CPU cores. How you match
sockets to application threads is going to be insignificant.
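That advice can be sketched in a few lines (assuming pyzmq, whose `Context` constructor takes an `io_threads` argument; the C equivalent is `zmq_ctx_set(ctx, ZMQ_IO_THREADS, n)`):

```python
import os

# One background I/O thread per CPU core, as suggested above.
io_threads = os.cpu_count() or 1

# With pyzmq this would be:
#   import zmq
#   ctx = zmq.Context(io_threads=io_threads)
print(io_threads)
```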
On Thu, Feb 13, 2014 at 6:08 PM, Laurent Alebarde l.aleba...@free.fr wrote:
Hi all,
In a server, I want to assign one socket per client (CURVE). What is
best in terms of performance and resources? Say I want to deal with 1,000
simultaneous clients.
one socket per thread, with an average CPU load of 1%, and 1,000 threads?
100 sockets per thread, with an average CPU load near 100%, and 10 threads?
anything in between?
Some service delivery latency is acceptable.
Laurent
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev