This is mere speculation, since it's been a while since I last looked at
how CURVE is implemented, but I think the proxy (i.e. your broker) might
be causing the issue.
When you send a message from the PUSH socket to the proxy, only a single
CURVE session is involved. But when the proxy effectively performs an
application-level multicast to many subscribers, it uses as many CURVE
sessions as there are subscribers, so every outgoing copy of a message
has to be encrypted separately.
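For illustration, here is a minimal sketch (not your code; endpoints are
made up, certificate handling and zauth are omitted) of such a PULL->PUB
proxy with CURVE enabled via CZMQ. The PUB side ends up with one CURVE
session per connected subscriber:

#include <czmq.h>

//  Minimal sketch only: a PULL->PUB proxy with CURVE enabled on both
//  sockets. Endpoints are made up; a real Ironhouse setup would also
//  run zauth and load certificates from disk instead of generating
//  them here.
int main (void)
{
    zcert_t *proxy_cert = zcert_new ();

    //  Frontend: PULL socket the PUSH publisher connects to;
    //  a single CURVE session for that link
    zsock_t *frontend = zsock_new (ZMQ_PULL);
    zsock_set_curve_server (frontend, 1);
    zcert_apply (proxy_cert, frontend);
    zsock_bind (frontend, "tcp://*:5556");

    //  Backend: PUB socket the SUB clients connect to; one CURVE
    //  session per connected subscriber, so every outgoing copy of a
    //  message is encrypted separately
    zsock_t *backend = zsock_new (ZMQ_PUB);
    zsock_set_curve_server (backend, 1);
    zcert_apply (proxy_cert, backend);
    zsock_bind (backend, "tcp://*:5557");

    //  Shuttle messages from frontend to backend until interrupted
    zmq_proxy (zsock_resolve (frontend), zsock_resolve (backend), NULL);

    zsock_destroy (&backend);
    zsock_destroy (&frontend);
    zcert_destroy (&proxy_cert);
    return 0;
}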
But as Luca suggested, try profiling, and use libsodium. In my project
(which used a topology similar to yours) libsodium was about 10-15 times
faster than TweetNaCl.
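If you want a rough feel for raw crypto throughput before profiling the
whole app, a quick micro-benchmark of libsodium's crypto_box primitives
might look like the sketch below. The message size and iteration count
are arbitrary, and crypto_box with a precomputed shared key is only an
approximation of what the CURVE engine does per message:

#include <sodium.h>
#include <stdio.h>
#include <time.h>

//  Rough micro-benchmark sketch of libsodium's crypto_box primitives,
//  approximately the per-message work a CURVE session does once the
//  handshake has precomputed the shared key. Build with:
//  cc bench.c -lsodium
int main (void)
{
    if (sodium_init () < 0)
        return 1;

    unsigned char server_pk [crypto_box_PUBLICKEYBYTES];
    unsigned char server_sk [crypto_box_SECRETKEYBYTES];
    unsigned char client_pk [crypto_box_PUBLICKEYBYTES];
    unsigned char client_sk [crypto_box_SECRETKEYBYTES];
    crypto_box_keypair (server_pk, server_sk);
    crypto_box_keypair (client_pk, client_sk);

    //  Precompute the shared key once, as the handshake does
    unsigned char shared [crypto_box_BEFORENMBYTES];
    crypto_box_beforenm (shared, server_pk, client_sk);

    unsigned char nonce [crypto_box_NONCEBYTES];
    randombytes_buf (nonce, sizeof nonce);

    enum { MSG_SIZE = 8192, ITERATIONS = 10000 };
    static unsigned char plaintext [MSG_SIZE];
    static unsigned char ciphertext [MSG_SIZE + crypto_box_MACBYTES];

    clock_t start = clock ();
    for (int i = 0; i < ITERATIONS; i++)
        crypto_box_easy_afternm (ciphertext, plaintext, MSG_SIZE,
                                 nonce, shared);
    double elapsed = (double) (clock () - start) / CLOCKS_PER_SEC;
    printf ("%d x %d bytes in %.3f s (%.1f MB/s)\n",
            ITERATIONS, MSG_SIZE, elapsed,
            ITERATIONS * (double) MSG_SIZE / elapsed / 1e6);
    return 0;
}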
On 05. 01. 2018 14:45, Luca Boccassi wrote:
On Fri, 2018-01-05 at 12:46 +0000, Stephen Gray wrote:
I'm building a 'clone' pattern distributed app for transmission of
time-series data using CZMQ, with the option to enable or disable
CURVE security à la IRONHOUSE.
It has a PUSH->{PROXY: PULL->PUB}->SUB arrangement for delivery of
latest updates and ROUTER->DEALER for data history requests and
responses.
It's functioning nicely in initial testing, both with CURVE enabled
and disabled. A stripped-down sketch of the request side is below.
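(For illustration only, not the actual app: an Ironhouse-style CURVE
client for the DEALER history requests, with a made-up endpoint,
certificate path and request format.)

#include <czmq.h>

//  Stripped-down sketch of an Ironhouse-style CURVE client making
//  batched history requests over a DEALER socket. Endpoint,
//  certificate path and the "HISTORY" request string are illustrative.
int main (void)
{
    zcert_t *client_cert = zcert_new ();
    zcert_t *server_cert = zcert_load ("server_cert.pub");
    assert (server_cert);

    zsock_t *dealer = zsock_new (ZMQ_DEALER);
    zcert_apply (client_cert, dealer);
    zsock_set_curve_serverkey (dealer, zcert_public_txt (server_cert));
    zsock_connect (dealer, "tcp://localhost:5555");

    //  e.g. 1000 sequential batch requests of 1000 data points each
    for (int batch = 0; batch < 1000; batch++) {
        zstr_sendf (dealer, "HISTORY %d", batch);
        zmsg_t *reply = zmsg_recv (dealer);
        if (!reply)
            break;              //  Interrupted or disconnected
        zmsg_destroy (&reply);
    }
    zsock_destroy (&dealer);
    zcert_destroy (&server_cert);
    zcert_destroy (&client_cert);
    return 0;
}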
However, when I increase the number of data points (from a few
hundred) to one million, an issue appears.
With CURVE disabled, the million data points (requested DEALER->ROUTER
as 1000 messages x 1000 data points each) are requested, delivered
and synchronised in the blink of an eye.
When CURVE is enabled, the client just gets DISCONNECTED whenever
it tries to connect and make the 1000 x 1000 message requests. The
1000 requests are fast and sequential. I tried changing to 100
messages x 10,000 data points, but this made no difference.
Are there messaging limits in the CURVE protocol? Does anyone know
why I might get this behaviour?
With thanks,
Stephen.
P.S. Code for this is long & involved, too much to expect anyone to
read. ;-)
Have you tried profiling to see where the bottleneck is?
If it's in the crypto primitives, check whether you are using libsodium
or the embedded tweetnacl. IIRC libsodium supports hardware
accelerators, including recent-ish CPU instructions.
I don't think I've seen benchmarks before.
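(One quick way to see which accelerated code paths libsodium can take
on a given machine, assuming you link against libsodium directly, is
its runtime CPU feature detection; a small sketch:)

#include <sodium.h>
#include <stdio.h>

//  Print which CPU features libsodium detects at runtime; these gate
//  its accelerated code paths. Build with: cc features.c -lsodium
int main (void)
{
    if (sodium_init () < 0)
        return 1;
    printf ("libsodium %s\n", sodium_version_string ());
    printf ("  SSE2:   %d\n", sodium_runtime_has_sse2 ());
    printf ("  SSSE3:  %d\n", sodium_runtime_has_ssse3 ());
    printf ("  AVX:    %d\n", sodium_runtime_has_avx ());
    printf ("  AVX2:   %d\n", sodium_runtime_has_avx2 ());
    printf ("  AES-NI: %d\n", sodium_runtime_has_aesni ());
    printf ("  NEON:   %d\n", sodium_runtime_has_neon ());
    return 0;
}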
_______________________________________________
zeromq-dev mailing list
[email protected]
https://lists.zeromq.org/mailman/listinfo/zeromq-dev