Re: [zeromq-dev] cppzmq "tour"

2020-04-15 Thread Arnaud Loonstra

On 16-04-2020 02:05, Brett Viren wrote:

Hi,

I wrote up a "tour" of cppzmq in order to drive me to learn more about
it.

   https://brettviren.github.io/cppzmq-tour/

Maybe it's of use or interest to others.

Cheers,
-Brett.



Nice, documentation like that is very welcome!

Rg,

Arnaud


[zeromq-dev] cppzmq "tour"

2020-04-15 Thread Brett Viren
Hi,

I wrote up a "tour" of cppzmq in order to drive me to learn more about
it.

  https://brettviren.github.io/cppzmq-tour/

Maybe it's of use or interest to others.  
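
For a taste of the style, here is a minimal cppzmq REQ/REP client (a
freestanding sketch, not taken from the tour; the endpoint is arbitrary
and a matching REP server is assumed):

  #include <zmq.hpp>
  #include <iostream>
  #include <string>

  int main() {
      zmq::context_t ctx;
      zmq::socket_t req(ctx, zmq::socket_type::req);
      req.connect("tcp://127.0.0.1:5555");  // arbitrary endpoint

      std::string hello = "hello";
      req.send(zmq::buffer(hello), zmq::send_flags::none);

      zmq::message_t rep;
      if (req.recv(rep, zmq::recv_flags::none))
          std::cout << rep.to_string() << std::endl;
      return 0;
  }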

Cheers,
-Brett.




Re: [zeromq-dev] Minimizing ZMQ latency for large messages

2020-04-15 Thread Brett Viren
Hi Seyed,

Seyed Hossein Mortazavi  writes:

> I'm running the server and the client over a network with latency (160ms
> round trip). I create the latency using tc on both the client and the
> server:
>
> tc qdisc  del dev eth0 root
> tc class  add dev eth0 parent 1: classid 1:155 htb rate 1000mbit
> tc filter add dev eth0 parent 1: protocol ip prio 1 u32 flowid 1:155 match ip dst 192.168.181.1/24
> tc qdisc  add dev eth0 parent 1:155 handle 155: netem delay $t1 $dt1 distribution normal

Is "$t1" set to "160" then?  I've not used "tc" before, does the "netem
delay" apply to both sending and receiving and also do you have this set
on both endpoints?  Ie, for a given value of "$t1" and infinite
bandwidth, what overall latency is expected?

Maybe remove the jitter ("$dt1") from the "tc" settings for some tests,
just to make the numbers easier to interpret?

Again, excuse my ignorance, but do subsequent "netem delay" commands add
to, or replace, a previous delay setting?

> Now when I run java -jar client.jar 192.168.181.3  10 I get the
> following output:
>
> Sending Hello 1
> Received world 1 1103.392783

This first one is anomalously high, I think due to some socket
initialization that occurs only on the first message.  There is a long
discussion in an issue on libzmq's GitHub tracker from a few months back
that touches on this aspect.

> Sending Hello 2
> Received world 2 322.553512

If I understand your code correctly, this loop sent and received
2*100kB, i.e. about 1.6 Mbit.  Saturating a Gbps link, that transfer
needs only about 2-3 ms, so something other than the network is
definitely responsible.

> Sending Hello 3
> Received world 3 478.10143
> Sending Hello 4
> Received world 4 606.396567
> Sending Hello 5
> Received world 5 641.465041
> Sending Hello 6
> Received world 6 772.961712
> Sending Hello 7
> Received world 7 910.848674
> Sending Hello 8
> Received world 8 966.694224
> Sending Hello 9
> Received world 9 940.645636
>
> which means that as we increase the size of the message, it takes more
> round trips to send the message and receive the ack (you can play with
> the message size to see for yourself).

Just to be clear, each loop iteration send()s and recv()s exactly one
message, so the number of round trips stays constant as the message
size grows.

> I was wondering what I need to do
> to prevent that from happening, that is: send everything in one go and
> minimize the latency to the roundtrip time. 

Since it is an unknown quantity to me, I'm biased to first suspect Java.
Though, it's hard for me to believe that even Java would add so much
time just to allocate and marshal this small chunk of memory.

You might run the performance testers provided by libzmq on your "tc"
shaped network to see if you can exclude Java as the source of the
problem.  Once built they can be found in

  libzmq/perf/*_{lat,thr}
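
For example, to measure round-trip latency with 100kB messages over 100
round trips (hypothetical host and port):

  ./local_lat tcp://*:5555 102400 100                # on one endpoint
  ./remote_lat tcp://192.168.181.3:5555 102400 100   # on the other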

And, if you haven't seen it already, the wiki has some results of
studies using them; they were useful when I looked at ZeroMQ
performance.

  http://wiki.zeromq.org/area:results

> Note: In my original application, I'm using a REQ-ROUTER pattern as I have
> multiple clients, but the issue with the latency and large messages lingers
> on

If your application really doesn't need synchronous query/response,
then using PUSH/PULL or DEALER/ROUTER could allow protocols that do not
suffer as much round-trip-related latency.
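
For example, a DEALER client can pipeline several requests before
reading any replies, so only the first reply costs a full round trip.
A rough cppzmq sketch (hypothetical endpoint; the ROUTER peer must
route each reply back by the client's identity):

  #include <zmq.hpp>
  #include <string>

  int main() {
      zmq::context_t ctx;
      zmq::socket_t sock(ctx, zmq::socket_type::dealer);
      sock.connect("tcp://192.168.181.3:5555");    // hypothetical endpoint

      const std::string payload(100 * 1024, 'x');  // 100kB, as in the test
      const int n = 10;

      // Pipeline all requests up front; DEALER does not enforce the
      // strict send/recv lockstep that REQ does.
      for (int i = 0; i < n; ++i)
          sock.send(zmq::buffer(payload), zmq::send_flags::none);

      // Then drain the replies.
      for (int i = 0; i < n; ++i) {
          zmq::message_t rep;
          if (!sock.recv(rep, zmq::recv_flags::none))
              break;
      }
      return 0;
  }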

But, in your position, I'd first understand the source of this latency.

-Brett.



