I concur with Steve-o's comments.  The current NORM transport option in the git 
repository has some "middle-of-the-road" parameter settings with regard to 
performance.  It does do TCP-friendly congestion control.  It is set for a 
modest level of FEC erasure coding, which it uses for highly efficient ARQ to 
potentially large group sizes.  The FEC encoding/decoding requires some CPU 
usage and could be a bottleneck at very high data rates, but the FEC 
parameters could be reduced or zeroed for modest group sizes (e.g., tens of 
nodes) if slightly less ARQ efficiency is acceptable and/or low packet loss is 
expected.  The other option would be to increase the NORM buffer sizes to allow 
for higher rates (and the NORM API also allows the underlying UDP socket 
buffers to be set to larger sizes for further tuning).  Those parameters aren't 
exposed in the ZeroMQ APIs, and NORM (and I) are somewhat newbies here, so 
suggestions in that regard are welcome!
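For concreteness, here is a rough sketch of those knobs using the NRL NORM C 
API.  The address, port, buffer sizes, and FEC parameters below are purely 
illustrative assumptions, not recommended values:

```c
#include <stdbool.h>
#include <normApi.h>

int main(void)
{
    NormInstanceHandle instance = NormCreateInstance(false);
    /* Multicast session; address/port here are placeholders */
    NormSessionHandle session =
        NormCreateSession(instance, "224.1.2.3", 5570, NORM_NODE_ANY);

    /* NormStartSender(session, sessionId, bufferSpace, segmentSize,
     *                 blockSize, numParity):
     * - a larger bufferSpace helps sustain higher data rates
     * - numParity = 0 zeroes out the FEC parity segments, trading some
     *   ARQ efficiency for lower CPU cost per packet */
    NormStartSender(session, 1, 64 * 1024 * 1024, 1400, 64, 0);

    /* The underlying UDP socket buffer can be enlarged too */
    NormSetTxSocketBuffer(session, 4 * 1024 * 1024);

    /* ... enqueue data for transmission, then clean up ... */
    NormStopSender(session);
    NormDestroySession(session);
    NormDestroyInstance(instance);
    return 0;
}
```

This is the sender side; a receiver would similarly use NormStartReceiver() 
and NormSetRxSocketBuffer() for its buffer tuning.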

On congestion control, the NORM TCP-friendly mode is what is set up in the 
ZeroMQ "norm_engine", but there are some added options that could be invoked 
there.  A fixed rate can be set, and there is also the ability to set lower 
and upper bounds on the rate adjustment performed by the automated congestion 
control.  In a controlled environment (e.g., a 10GigE LAN), the rate bounds 
could be applied to help "jump start" the usual slow start and provide somewhat 
more managed behavior.  
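In NORM API terms those options look something like the sketch below; the 
specific rate bounds are arbitrary example values for a fast LAN, not 
recommendations:

```c
#include <stdbool.h>
#include <normApi.h>

/* Assumes "session" was created and started as a NORM sender elsewhere */
void tune_rate(NormSessionHandle session)
{
    /* TCP-friendly congestion control, as the ZeroMQ norm_engine enables */
    NormSetCongestionControl(session, true);

    /* Bound the automated rate adjustment (bits/sec) to "jump start"
     * slow start on a known-capacity network; example values only */
    NormSetTxRateBounds(session, 100.0e6, 9.0e9);

    /* Alternatively, skip congestion control and pin a fixed rate: */
    /*   NormSetCongestionControl(session, false); */
    /*   NormSetTxRate(session, 1.0e9);            */
}
```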

Finally, for the current ZMQ_PUB/ZMQ_SUB "norm_engine" support, there is no 
explicit flow control implemented, since the subscriber (receiver) set is 
unknown to the sender, but the NORM API can be used to invoke optional 
ACK-based flow control when the receiver set is known.  I have actually coded 
that into the fork of libzmq I have at 
https://github.com/bebopagogo/libzmq, although a mechanism for providing the 
publisher (NORM sender) with the list of subscribers (NORM receivers) to 
populate the sender's "acking node list" isn't implemented yet.  (This will 
also be useful for unicast purposes as the code evolves.)  Flow control is a 
_key_ aspect of protocol operation, particularly when pushing the performance 
envelope in one way or another (e.g., bandwidth*delay product, packet loss, 
etc.).
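For reference, the ACK-based flow control mentioned above is driven from the 
NORM API roughly as sketched below; the node ID plumbing and event loop are 
simplified, and a real engine would handle the other event types too:

```c
#include <stdbool.h>
#include <normApi.h>

/* Assumes the sender "session" and its "instance" already exist */
void flow_control_point(NormInstanceHandle instance,
                        NormSessionHandle session,
                        NormObjectHandle tx_object,
                        NormNodeId subscriber_id)
{
    /* Tell the sender which receivers must positively acknowledge;
     * this is the "acking node list" that a subscriber-discovery
     * mechanism would populate */
    NormAddAckingNode(session, subscriber_id);

    /* Request a positive-ACK "watermark" at the given transmit object */
    NormSetWatermark(session, tx_object, false);

    /* Block until the watermark round completes before queuing more data */
    NormEvent event;
    while (NormGetNextEvent(instance, &event, true))
    {
        if (NORM_TX_WATERMARK_COMPLETED == event.type) break;
    }
}
```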

best regards,

Brian


On Apr 18, 2014, at 12:29 PM, Steven McCoy <[email protected]> wrote:

> On 18 April 2014 10:25, Michel Pelletier <[email protected]> wrote:
> I'm afraid I can't help you with your specific pgm problem, but if you don't 
> mind me playing devil's advocate for a second, it seems like you're doing a 
> lot of engineering work to distribute a file to 20 servers.  Have you 
> considered using an existing multicast tool like uftp or udpcast to 
> distribute the file?
> 
> 
> Yes, I see this a lot.  Yes, multicast is ideal for fast file distribution, but 
> congestion control and reliability are not a given.  One of the first PGM 
> implementations was created and predominantly used for wide file 
> distribution, but they conveniently layered a file transfer protocol above it, 
> and everything was designed for satellite-style high bandwidth, high latency, 
> low packet rates.  These days, peer-to-peer distribution above TCP overlay 
> networks is significantly cheaper to deploy and only costs additional latency 
> through multi-hop traversal.
> 
> OpenPGM was created to be flexible but has only been applied to high packet 
> rate, low latency applications, and ZeroMQ has incorporated this model.  There 
> is a congestion control protocol taken from an earlier SmartPGM 
> implementation, but it has not aged well at all, so it is disabled by default 
> and not accessible at all through the ZeroMQ interface.  NORM would be a 
> better choice of protocol here, if only because it is stable and proven with 
> additional features, though at the cost of some level of performance.  This is 
> the challenge: no stable, scalable, high-performance congestion control 
> protocol suitable for 10GigE+ multicast has been invented yet.
> 
> The new link for UFTP is here:
> 
> http://uftp-multicast.sourceforge.net/
> 
> -- 
> Steve-o
> _______________________________________________
> zeromq-dev mailing list
> [email protected]
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
