The buffer is per TCP connection; how many connections are going to stream the messages? On Sep 25, 2015 2:06 AM, "Auer, Jens" <[email protected]> wrote:
> > Increasing the buffer will not change the number of allocations, only
> > the number of recv API calls; I think this is the performance gain you
> > see. The buffer size is per TCP connection. How many messages do you
> > expect to get on one TCP connection? I don't believe you will see a
> > performance improvement in a real-world scenario (multiple clients on
> > remote computers). Increasing the buffer can cause problems like
> > starvation and high CPU usage (from the copy operation).
>
> I have a real-world scenario at hand, namely the application I am
> developing. It is a distributed application where processes exchange
> messages of 1125 bytes at high frequencies, up to 30,000 msg/s. The
> computers are connected via a 10 Gb network. For testing, I manually
> increased the buffers, and this reduced the CPU load significantly.
>
> Of course this is because the number of system calls is reduced, but that
> is the whole point of using the buffers anyway. If you did not care about
> reducing system calls, you could read a single message directly with three
> reads, without using an additional buffer: read the first byte, then the
> size, then allocate the message and read the data. For sending, you would
> not need to buffer anything, just send whatever is available. This would
> make the code much simpler. Instead, ZeroMQ chose to use buffers for both
> to reduce system calls. What I am proposing is to make this adaptable to
> the application, because there is no single optimal buffer size for
> different applications. It is a trade-off only the application developer
> can make.
>
> I don't think the number of allocations can be reduced further, at least
> for message reception. In the 4.2 branch, it does a single allocation for
> the reception buffer (8k) and uses this as the memory for zero-copy
> messages when receiving large messages. So receiving is done with a
> single allocation for large messages.
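The "three reads per message" approach described above can be sketched as follows. The framing here (one flags byte, one size byte, then the payload) and the helper names `read_exact`/`recv_msg` are illustrative assumptions for a short frame, not zeromq's actual decoder code:

```c
/* Sketch of reading one message with three (or more) syscalls and no
 * intermediate buffer. Framing is hypothetical: flags byte, size byte,
 * payload. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Loop until exactly n bytes are read; a single read() may return less. */
static int read_exact(int fd, void *buf, size_t n) {
    uint8_t *p = buf;
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0) return -1;
        p += r;
        n -= (size_t)r;
    }
    return 0;
}

/* Three reads per message: flags, size, then the payload into a freshly
 * allocated message body. Caller frees the returned buffer. */
static uint8_t *recv_msg(int fd, uint8_t *flags, uint8_t *size) {
    if (read_exact(fd, flags, 1) < 0) return NULL;
    if (read_exact(fd, size, 1) < 0) return NULL;
    uint8_t *body = malloc(*size);          /* one allocation per message */
    if (body == NULL) return NULL;
    if (read_exact(fd, body, *size) < 0) { free(body); return NULL; }
    return body;
}
```

The sketch makes the trade-off concrete: without a receive buffer each message costs at least three syscalls, which is exactly what the 8k buffered decoder amortizes away.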
> When sending, the data is copied into a static buffer and then sent. It
> may be possible to eliminate the copy operation by using vector I/O, and
> I started looking into that, but I haven't had time to work on it lately.
>
> Cheers,
> Jens
>
> --
> Dr. Jens Auer | CGI | Software Engineer
> CGI Deutschland Ltd. & Co. KG
> Rheinstraße 95 | 64295 Darmstadt | Germany
> T: +49 6151 36860 154
> [email protected]
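The vector I/O idea mentioned for the send side could look roughly like this. The `send_msg` helper and the two-byte header are assumptions for illustration, not zeromq's actual encoder; the point is that `writev` lets the kernel gather the header and the message body in one syscall, so the copy into a staging buffer disappears:

```c
/* Sketch of sending one framed message with a single writev() call and no
 * user-space copy. Framing is hypothetical: flags byte, size byte,
 * payload. */
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Hand both memory regions (header and payload) to the kernel at once. */
static ssize_t send_msg(int fd, const void *body, uint8_t size) {
    uint8_t header[2] = { 0x00, size };   /* flags byte + length byte */
    struct iovec iov[2] = {
        { .iov_base = header,       .iov_len = sizeof header },
        { .iov_base = (void *)body, .iov_len = size },
    };
    return writev(fd, iov, 2);            /* gather write, one syscall */
}
```

Like a plain write(), writev() may write fewer bytes than requested on a nearly full socket buffer, so a real encoder would still need a retry loop around the partial case.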
_______________________________________________
zeromq-dev mailing list
[email protected]
http://lists.zeromq.org/mailman/listinfo/zeromq-dev
