Hi,
I am using (among others) PUSH and PULL sockets. Recently I noticed
that under high load (pushing a lot of messages) messages were lost. The
documentation for the PUSH socket
(http://api.zeromq.org/4-0:zmq-socket#toc14) says that on reaching the
high-water mark the socket enters the mute state.
This seems like incorrect behavior. Has anyone else experienced the
same problem?
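For what it's worth, the mute state is observable directly. A minimal sketch (assuming pyzmq is installed): a PUSH socket whose peers have all hit the high-water mark, or that has no peers at all as below, does not drop silently on a blocking send; a non-blocking send instead fails with EAGAIN (zmq.Again). The port and HWM value here are illustrative only.

```python
import zmq

# A PUSH socket with no connected PULL peer is in the "mute" state:
# a blocking send() would wait, and a non-blocking send raises zmq.Again.
ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.setsockopt(zmq.SNDHWM, 10)            # small HWM, for illustration only
push.bind_to_random_port("tcp://127.0.0.1")  # no PULL peer ever connects

blocked = False
try:
    push.send(b"hello", flags=zmq.NOBLOCK)
except zmq.Again:
    blocked = True  # mute state: the send would have blocked, not been dropped

push.close(linger=0)
ctx.term()
```

If messages appear to vanish, it is worth checking whether some sender is using NOBLOCK and discarding the EAGAIN, since that looks exactly like silent loss.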
On Mon, May 11, 2015 at 8:39 AM, Antti Karanta antti.kara...@neromsoft.com
wrote:
On 2015-05-11 15:53, Sergey Zinov wrote:
Hi all,
I encountered a problem while using ZeroMQ with the epgm transport.
I have several nodes that should broadcast messages to each other; for
that purpose each node creates two sockets (one PUB and one SUB) with
the same URL. See the attachment for sample code (Python).
First I tested it on amd64
Hi Arnaud,
Thanks for the answer. I know that I should always set the subscribe
option on the SUB socket, and I always do (see my sample code). The
problem is that the strange behavior appears depending on where I
perform the subscription. And I don't think that subscription option
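A hedged sketch of the broadcast pattern described, with inproc:// standing in for the epgm:// multicast transport (which needs a PGM-enabled build and an endpoint like "epgm://eth0;239.x.x.x:port"). It shows the two details the thread turns on: the SUBSCRIBE option must be set (an empty prefix means "everything"), and the slow-joiner effect means a subscription needs a moment to reach the publisher.

```python
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://bcast")

sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")  # without this, every message is filtered out
sub.connect("inproc://bcast")

time.sleep(0.2)  # slow joiner: let the subscription propagate to the publisher
pub.send(b"hello")
msg = sub.recv()

pub.close(linger=0)
sub.close(linger=0)
ctx.term()
```

Messages published before the subscription has propagated are simply dropped by the PUB side, which can look like the placement of the subscribe call changing behavior.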
If you can show that it's faster to use the default functions, please
make a pull request and we'll merge it.
On Mon, May 11, 2015 at 10:56 PM, Auer, Jens jens.a...@cgi.com wrote:
Hi,
I've been looking at the ZeroMQ source code a little and was surprised that
wire.hpp implements custom endianness conversion functions to convert 16-,
32- and 64-bit values to/from network byte order. Is there any reason for not
using the available functions like the hton* or htobe* family on Linux? I
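To illustrate what is being compared: a hand-rolled big-endian encoder of the kind wire.hpp implements is just explicit byte shifting, and it must produce the same bytes as the standard network-order conversion (struct's "!" format in Python, the analogue of htonl in C). This is a pure-Python sketch, not the actual wire.hpp code.

```python
import struct

def put_uint32(value):
    # Emit the four bytes of a 32-bit value, most significant byte first
    # (network byte order), the way a hand-rolled encoder would.
    return bytes([(value >> 24) & 0xFF, (value >> 16) & 0xFF,
                  (value >> 8) & 0xFF, value & 0xFF])

encoded = put_uint32(0x01020304)
```

Both spellings are interchangeable on the wire; the question in the thread is purely about reusing the platform-provided functions versus portability of the hand-rolled ones.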
I just created a list filled with 2 million zeros and sent it out. Any problems?
On Sun, Apr 26, 2015 at 8:27 PM, KIU Shueng Chuan nixch...@gmail.com wrote:
I read your code as sending a multipart message composed of 2 million parts
each of size 1 byte. Is that right?
On Apr 26, 2015 16:50, Li
Each element of the list is sent as a zeromq message frame. Each (short)
frame has a header overhead of 2 bytes. track=True also causes more
overhead per frame.
['0' for x in range(N)] is sent as N message frames.
['0'*N] is sent as 1 message frame. (Did you want to do this instead?)
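The framing difference above can be seen directly with pyzmq's send_multipart (sketch assuming pyzmq; a small N stands in for the thread's 2 million): each list element travels as its own frame, so the two calls below put very different messages on the wire.

```python
import zmq

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://frames")
push = ctx.socket(zmq.PUSH)
push.connect("inproc://frames")

N = 5
push.send_multipart([b"0" for _ in range(N)])  # N one-byte frames, each with framing overhead
push.send_multipart([b"0" * N])                # a single N-byte frame

many = pull.recv_multipart()  # arrives as N parts
one = pull.recv_multipart()   # arrives as 1 part

push.close(linger=0)
pull.close(linger=0)
ctx.term()
```

At N = 2,000,000 the per-frame header and bookkeeping overhead of the first form dominates, which is why collapsing to one frame changes the picture so much.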
On Tue,
(But it is on the critical path for large messages.)
If you were using this approach to encode fields in some general
serialization format, maybe it would make a measurable difference, but it
seems unlikely to do so relative to framing ZeroMQ messages. In the worst
case the put|get_uint64
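One way to ground that claim is a micro-benchmark sketch: compare manual byte assembly of a 64-bit network-order value against the library routine. The helper name and iteration count below are illustrative, not from wire.hpp; absolute timings vary by machine, and the point is only that either cost is tiny next to per-message framing work.

```python
import struct
import timeit

def put_uint64(v):
    # Manual big-endian assembly: shifts of 56, 48, ..., 0 emit the
    # eight bytes most significant first.
    return bytes([(v >> s) & 0xFF for s in range(56, -1, -8)])

v = 0x0102030405060708
same_bytes = put_uint64(v) == struct.pack("!Q", v)  # both are network order

t_manual = timeit.timeit(lambda: put_uint64(v), number=10_000)
t_struct = timeit.timeit(lambda: struct.pack("!Q", v), number=10_000)
```

Whichever encoder wins, both run in tens of nanoseconds per call, so the conversion only matters if it sits inside a hot per-field loop.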