[zeromq-dev] zmq_msg_send blocks reading from zmq::mailbox_t

2015-07-01 Thread Marcin Romaszewicz
Hi Guys,

Our prod systems have been running on 3.2.4. To work around some file
descriptor leaks, I'm testing 4.1.2 before back-porting jbreams' heartbeats
patch, and unmodified 4.1.2 is failing for me in production.

I've got ZMQ_ROUTER sockets wedged in zmq_msg_send, spinning in a while
(true) loop calling recv() on the mailbox in zmq::socket_base_t::send().

Before I start chasing the cause of this, is there a known mistake on my
part which could cause this?

The socket is a ZMQ_ROUTER, with all settings at default except for
ZMQ_LINGER=0 and ZMQ_ROUTER_MANDATORY=1.
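
For reference, the setup is roughly this (a minimal pyzmq sketch; the
endpoint and routing id are placeholders, and the DONTWAIT handling is only
there to show where the call can block or fail, not what our code does):

import zmq

ctx = zmq.Context.instance()
router = ctx.socket(zmq.ROUTER)
router.setsockopt(zmq.LINGER, 0)
router.setsockopt(zmq.ROUTER_MANDATORY, 1)
router.bind("tcp://*:5555")                  # placeholder endpoint

peer = b"worker-1"                           # placeholder routing id
try:
    # With ROUTER_MANDATORY=1, a blocking send waits while the peer's pipe
    # is full; DONTWAIT turns that wait into EAGAIN instead.
    router.send_multipart([peer, b"payload"], flags=zmq.DONTWAIT)
except zmq.Again:
    pass                                     # peer known, but its queue is full
except zmq.ZMQError as e:
    if e.errno == zmq.EHOSTUNREACH:
        pass                                 # routing id not (or no longer) known
    else:
        raise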

I'm pretty sure I'm sending stuff to a dead peer, but not 100% certain yet.

Any tips on what I should be looking for to diagnose this? The same code,
unmodified, doesn't have this issue against zmq 3.2.4.

Thanks,
-- Marcin
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] socket event monitoring problem when using an event loop

2015-07-01 Thread Chris Laws
libzmq raises an error if you try anything other than an inproc://
transport. The socket event monitor emitter socket is created within
libzmq.

Developers can specify the address string (the part after the inproc://
scheme) but not the transport.
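
For reference, a minimal pyzmq sketch of what that looks like (the
"monitor.mysock" name and the tcp endpoint are placeholders):

import zmq
from zmq.utils.monitor import recv_monitor_message

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.DEALER)               # any socket type being monitored

# Only the part after inproc:// is ours to choose; the transport must be
# inproc because libzmq creates the emitter PAIR socket internally.
sock.monitor("inproc://monitor.mysock", zmq.EVENT_ALL)

mon = ctx.socket(zmq.PAIR)
mon.connect("inproc://monitor.mysock")

sock.connect("tcp://localhost:5555")        # placeholder, just to generate events
event = recv_monitor_message(mon)           # e.g. EVENT_CONNECT_DELAYED
print(event["event"], event["endpoint"])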
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Build pyzmq on Windows with VC11 and Python 2.7

2015-07-01 Thread MinRK
Can you build other Python extensions? Is PyZMQ the only one that fails?

On Wed, Jul 1, 2015 at 2:51 PM, Christoph Buelter 
wrote:

> Hi,
>
> has anyone yet managed to build pyzmq on Windows with VC11 and Python 2.7.3?
> I am having all kinds of problems. The default Python 2.7.3 has been
> compiled with VC9, but I am using a custom version that was created with
> VC11, so I need pyzmq for that version.
> Anyone ever done it?
>
> Cheers
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] Build pyzmq on Windows with VC11 and Python 2.7

2015-07-01 Thread Christoph Buelter
Hi,

has anyone yet managed to build pyzmq on Windows with VC11 and Python 2.7.3?
I am having all kinds of problems. The default Python 2.7.3 has been
compiled with VC9, but I am using a custom version that was created with
VC11, so I need pyzmq for that version.
Anyone ever done it?

Cheers
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] socket event monitoring problem when using an event loop

2015-07-01 Thread Justin Karneges
Possibly related: https://github.com/zeromq/libzmq/issues/1434

You might see if you have better luck using async i/o with ipc instead
of inproc.

On Wed, Jul 1, 2015, at 07:47 AM, Chris Laws wrote:
> I was recently working on getting the socket event monitor working in
> an async focused Python library, aiozmq, and I had great difficulty
> getting it working reliably. Hopefully this post might save someone
> some time if they are trying something similar in the future.
>
> tl;dr: when using an async programming model (e.g. an event loop),
> connect before bind when setting up the socket monitor PAIR sockets.
>
> The library I was working with uses pyzmq for lots of its internals.
> It is worth noting that I had no problem using the socket monitor from
> pyzmq along with the typical blocking style approach where I would run
> the socket event monitor poller in a separate thread.
>
> However when I tried to use the socket monitor along with the
> asynchronous model encouraged by the Python (3.4) asyncio module I
> found that the delivery of socket events was very unreliable - often
> the events would only come through when I was shutting down the
> monitor transport (the socket) at the end of a test.
>
> I started out using pyzmq's get_monitor_socket function which would
> enable the socket monitor using the following sequence:
> - dynamically generate an inproc:// address to use;
> - request libzmq to create and bind a PAIR socket to the address;
> - create a new PAIR socket and connect it to the inproc:// address;
> - return the connected PAIR socket.
>
> This sequence of events turned out to be the cause of the problem I
> observed. However, this common approach only seems to be a problem
> when the socket is used in an (async) event loop that watches the
> file descriptor for a read-ready event.
>
> I eventually found this problem mentioned here:
> https://github.com/mkoppanen/php-zmq/issues/130 and then here
> http://lists.zeromq.org/pipermail/zeromq-dev/2014-January/024545.html
>
> I changed the socket monitor enabling code sequence to:
> - create a dynamic inproc:// address;
> - create a new PAIR socket and attempt to connect it to the address
>   (even though there is no PAIR socket bound yet to receive the
>   connection);
> - request libzmq to create and bind a PAIR socket to the address;
> - return the connected PAIR socket.
>
> The exact implementation is here:
> https://github.com/aio-libs/aiozmq/blob/master/aiozmq/core.py#L537
>
> Using this sequence I was able to reliably monitor the PAIR socket's
> file descriptor to receive socket monitor events while running within
> the asyncio event loop.
>
> Regards, Chris
>
> _
> zeromq-dev mailing list zeromq-dev@lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] More Questions about C4.1

2015-07-01 Thread Paulmichael Blasucci
Awesome! Thanks for the reply.

On Wed, Jul 1, 2015 at 11:54 AM, Pieter Hintjens  wrote:

> You're overthinking it... one commit per problem is nice. Multiple
> commits in a pull request are typically fine too. It breaks the "pull
> request = commit" assumption, which you'll see C4.1 is fine with.
>
> The simplest flow is to queue up commits and merge as many as you can
> in one go. This works well IF people make one commit per problem and
> write their commit messages properly.
>
> -Pieter
>
> On Wed, Jul 1, 2015 at 5:52 PM, Luna Duclos
>  wrote:
> > You could use one branch per PR and not have this problem.
> >
> > On Wed, Jul 1, 2015 at 5:44 PM, Paulmichael Blasucci <
> pblasu...@gmail.com>
> > wrote:
> >>
> >> So, in general, it's one patch per problem -- which is good. However,
> the
> >> way GitHub tacks subsequent commits onto existing pull requests
> presents a
> >> bit of a dilemma. Do I keep piling up patches into one big merge? Or do
> I
> >> wait for my pull request to be merged before sending over new changes?
> (or
> >> am I over-thinking the whole process and/or missing some obvious
> feature of
> >> GitHub?)
> >>
> >> Thanks!
> >>
> >>
> >> ___
> >> zeromq-dev mailing list
> >> zeromq-dev@lists.zeromq.org
> >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >>
> >
> >
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > http://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] More Questions about C4.1

2015-07-01 Thread Pieter Hintjens
You're overthinking it... one commit per problem is nice. Multiple
commits in a pull request are typically fine too. It breaks the "pull
request = commit" assumption, which you'll see C4.1 is fine with.

The simplest flow is to queue up commits and merge as many as you can
in one go. This works well IF people make one commit per problem and
write their commit messages properly.

-Pieter

On Wed, Jul 1, 2015 at 5:52 PM, Luna Duclos
 wrote:
> You could use one branch per PR and not have this problem.
>
> On Wed, Jul 1, 2015 at 5:44 PM, Paulmichael Blasucci 
> wrote:
>>
>> So, in general, it's one patch per problem -- which is good. However, the
>> way GitHub tacks subsequent commits onto existing pull requests presents a
>> bit of a dilemma. Do I keep piling up patches into one big merge? Or do I
>> wait for my pull request to be merged before sending over new changes? (or
>> am I over-thinking the whole process and/or missing some obvious feature of
>> GitHub?)
>>
>> Thanks!
>>
>>
>> ___
>> zeromq-dev mailing list
>> zeromq-dev@lists.zeromq.org
>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] zyre question

2015-07-01 Thread Pieter Hintjens
On Tue, Jun 30, 2015 at 7:46 PM, Kalyan Bade  wrote:

> Doesn't the current node need to know who is part of the multicast group
> in order for SHOUT to be reliably transferred to all members?

There is no "all" and no "reliably"... nodes broadcast JOIN messages,
and every node maintains its own vision of what groups exist. With
pauses and handshaking you can be more or less sure of membership.

> We did play around with different sleep values just before sending the
> SHOUT. It has become less prevalent with higher sleep values. But I was
> looking for a deterministic solution, as we still see the issue occasionally
> even with a sleep as high as 10 seconds.

That is abnormal for inproc; you'd expect everything to be settled in
a second at most. You may have some other issues slowing down thread
startup.

> Btw, isn't a gossip network mandatory for forming the Zyre mesh network?

Over WiFi or Ethernet you'd use beaconing to start with, and gossip if
you have a larger network and you want to control discovery. Over
inproc there's no alternative to gossip.

-Pieter
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] More Questions about C4.1

2015-07-01 Thread Luna Duclos
You could use one branch per PR and not have this problem.

On Wed, Jul 1, 2015 at 5:44 PM, Paulmichael Blasucci 
wrote:

> So, in general, it's one patch per problem -- which is good. However, the
> way GitHub tacks subsequent commits onto existing pull requests presents a
> bit of a dilemma. Do I keep piling up patches into one big merge? Or do I
> wait for my pull request to be merged before sending over new changes? (or
> am I over-thinking the whole process and/or missing some obvious feature of
> GitHub?)
>
> Thanks!
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] More Questions about C4.1

2015-07-01 Thread Paulmichael Blasucci
So, in general, it's one patch per problem -- which is good. However, the
way GitHub tacks subsequent commits onto existing pull requests presents a
bit of a dilemma. Do I keep piling up patches into one big merge? Or do I
wait for my pull request to be merged before sending over new changes? (or
am I over-thinking the whole process and/or missing some obvious feature of
GitHub?)

Thanks!
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] socket event monitoring problem when using an event loop

2015-07-01 Thread Chris Laws
I was recently working on getting the socket event monitor working in an
async focused Python library, aiozmq, and I had great difficulty getting it
working reliably. Hopefully this post might save someone some time if they
are trying something similar in the future.

tl;dr: when using an async programming model (e.g. an event loop), connect
before bind when setting up the socket monitor PAIR sockets.

The library I was working with uses pyzmq for lots of its internals. It is
worth noting that I had no problem using the socket monitor from pyzmq
along with the typical blocking style approach where I would run the socket
event monitor poller in a separate thread.

However when I tried to use the socket monitor along with the asynchronous
model encouraged by the Python (3.4) asyncio module I found that the
delivery of socket events was very unreliable - often the events would only
come through when I was shutting down the monitor transport (the socket) at
the end of a test.

I started out using pyzmq's get_monitor_socket function which would enable
the socket monitor using the following sequence:
- dynamically generate an inproc:// address to use;
- request libzmq to create and bind a PAIR socket to the address;
- create a new PAIR socket and connect it to the inproc:// address;
- return the connected PAIR socket.

This sequence of events turned out to be the cause of the problem I
observed. However, this common approach only seems to be a problem when
the socket is used in an (async) event loop that watches the file
descriptor for a read-ready event.

I eventually found this problem mentioned here:
https://github.com/mkoppanen/php-zmq/issues/130 and then here
http://lists.zeromq.org/pipermail/zeromq-dev/2014-January/024545.html

I changed the socket monitor enabling code sequence to the following (see
the rough pyzmq sketch after the list):
- create a dynamic inproc:// address;
- create a new PAIR socket and attempt to connect it to the address (even
though there is no PAIR socket bound yet to receive the connection);
- request libzmq to create and bind a PAIR socket to the address;
- return the connected PAIR socket.
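
Roughly (the socket type and the inproc address below are placeholders):

import zmq

ctx = zmq.Context.instance()
watched = ctx.socket(zmq.DEALER)       # placeholder: the socket being monitored

addr = "inproc://monitor.watched"      # placeholder for the generated address

# Connect the receiving PAIR first, before anything is bound at the address.
mon = ctx.socket(zmq.PAIR)
mon.connect(addr)

# Only now ask libzmq to create and bind its emitter PAIR to the same address.
watched.monitor(addr, zmq.EVENT_ALL)

# The event loop can then watch the monitor socket via its file descriptor.
fd = mon.getsockopt(zmq.FD)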

The exact implementation is here:
https://github.com/aio-libs/aiozmq/blob/master/aiozmq/core.py#L537

Using this sequence I was able to reliably monitor the PAIR socket's file
descriptor to receive socket monitor events while running within the
asyncio event loop.

Regards,
Chris
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] missing messages on 40GbE network

2015-07-01 Thread Marko Vendelin
Hi Ben,

Any idea how I can check that? Should an error message come through ZMQ,
or somehow from the kernel?

Marko

On Wed, Jul 1, 2015 at 3:40 PM, Ben Kloosterman  wrote:
> More likely the NIC buffer or driver than ZeroMQ.
>
> ben
>
> On Wed, Jul 1, 2015 at 9:45 PM, Marko Vendelin  wrote:
>>
>> Dear ØMQ developers:
>>
>> Synopsis: I am observing a strange interaction between storing
>> datastream on harddisks and a loss of ZeroMQ messages. It seems that
>> in my use case, when messages are larger than 2MB, some of them are
>> randomly dropped.
>>
>> Full story:
>>
>> I need to pump images acquired by fast scientific cameras into the
>> files with the rates approaching 25Gb/s. For that, images are acquired
>> in one server and transferred into the harddisk array using 40Gb/s
>> network. Since Linux-based solutions using iSCSI were not working very
>> well (maybe need to optimize more) and plain network applications
>> could use the full bandwidth, I decided to use a RAID-0-inspired
>> approach: make filesystem on each of 32 harddisks separately, run
>> small slave programs one per filesystem and let the slaves ask the
>> dataset server for a dataset in a loop. As a messaging system, I use
>> ZeroMQ and a REQ/REP connection. In general, all seems to work perfectly:
>> I am able to stream and record data at about 36Gb/s rates. However, at
>> some point (within 5-10 min), sometimes messages get lost.
>> Intriguingly, this occurs only if I write files and messages are 2MB
>> or larger. Much smaller messages do not seem to trigger this effect.
>> If I just stream data and either dump it or just calculate on the
>> basis of it, all messages go through. All messages go through if I use
>> 1Gb network.
>>
>> While in production code I stream data into HDF5, use zmqpp and
>> polling to receive messages, I have reduced the problematic code into
>> the simplest case using zmq.hpp, regular files, and plain send/recv
>> calls. Code is available at
>>
>> http://www.ioc.ee/~markov/zmq/problem-missing-messages/
>>
>> At the same time, there don't seem to be any excessive drops in
>> ethernet cards, as reported by ifconfig in Linux (slaves run on
>> Gentoo, server on Ubuntu):
>>
>>
>> ens1f1: flags=4163  mtu 9000
>> inet 192.168.38.1  netmask 255.255.255.252  broadcast 192.168.38.3
>> inet6 fe80::225:90ff:fe9c:62c3  prefixlen 64  scopeid 0x20
>> ether 00:25:90:9c:62:c3  txqueuelen 1000  (Ethernet)
>> RX packets 8568340799  bytes 76612663159251 (69.6 TiB)
>> RX errors 7  dropped 0  overruns 0  frame 7
>> TX packets 1558294820  bytes 93932603947 (87.4 GiB)
>> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>>
>> eth3  Link encap:Ethernet  HWaddr 00:25:90:9c:63:1a
>>   inet addr:192.168.38.2  Bcast:192.168.38.3  Mask:255.255.255.252
>>   inet6 addr: fe80::225:90ff:fe9c:631a/64 Scope:Link
>>   UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
>>   RX packets:1558294810 errors:0 dropped:0 overruns:0 frame:0
>>   TX packets:8570261350 errors:0 dropped:0 overruns:0 carrier:0
>>   collisions:0 txqueuelen:1000
>>   RX bytes:102083292705 (102.0 GB)  TX bytes:76629844394725 (76.6
>> TB)
>>
>>
>> So, it should not be a simple dropped frames problem.
>>
>> Since the problem occurs only with larger messages, is there any
>> size-limited buffer in ZeroMQ that may cause dropping of the messages?
>> Or any other possible solution?
>>
>> Thank you for your help,
>>
>> Marko
>> ___
>> zeromq-dev mailing list
>> zeromq-dev@lists.zeromq.org
>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] missing messages on 40GbE network

2015-07-01 Thread Ben Kloosterman
More likely the NIC buffer or driver than ZeroMQ.

ben

On Wed, Jul 1, 2015 at 9:45 PM, Marko Vendelin  wrote:

> Dear ØMQ developers:
>
> Synopsis: I am observing a strange interaction between storing
> datastream on harddisks and a loss of ZeroMQ messages. It seems that
> in my use case, when messages are larger than 2MB, some of them are
> randomly dropped.
>
> Full story:
>
> I need to pump images acquired by fast scientific cameras into the
> files with the rates approaching 25Gb/s. For that, images are acquired
> in one server and transferred into the harddisk array using 40Gb/s
> network. Since Linux-based solutions using iSCSI were not working very
> well (maybe need to optimize more) and plain network applications
> could use the full bandwidth, I decided to use a RAID-0-inspired
> approach: make filesystem on each of 32 harddisks separately, run
> small slave programs one per filesystem and let the slaves ask the
> dataset server for a dataset in a loop. As a messaging system, I use
> ZeroMQ and a REQ/REP connection. In general, all seems to work perfectly:
> I am able to stream and record data at about 36Gb/s rates. However, at
> some point (within 5-10 min), sometimes messages get lost.
> Intriguingly, this occurs only if I write files and messages are 2MB
> or larger. Much smaller messages do not seem to trigger this effect.
> If I just stream data and either dump it or just calculate on the
> basis of it, all messages go through. All messages go through if I use
> 1Gb network.
>
> While in production code I stream data into HDF5, use zmqpp and
> polling to receive messages, I have reduced the problematic code into
> the simplest case using zmq.hpp, regular files, and plain send/recv
> calls. Code is available at
>
> http://www.ioc.ee/~markov/zmq/problem-missing-messages/
>
> At the same time, there don't seem to be any excessive drops in
> ethernet cards, as reported by ifconfig in Linux (slaves run on
> Gentoo, server on Ubuntu):
>
>
> ens1f1: flags=4163  mtu 9000
> inet 192.168.38.1  netmask 255.255.255.252  broadcast 192.168.38.3
> inet6 fe80::225:90ff:fe9c:62c3  prefixlen 64  scopeid 0x20
> ether 00:25:90:9c:62:c3  txqueuelen 1000  (Ethernet)
> RX packets 8568340799  bytes 76612663159251 (69.6 TiB)
> RX errors 7  dropped 0  overruns 0  frame 7
> TX packets 1558294820  bytes 93932603947 (87.4 GiB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> eth3  Link encap:Ethernet  HWaddr 00:25:90:9c:63:1a
>   inet addr:192.168.38.2  Bcast:192.168.38.3  Mask:255.255.255.252
>   inet6 addr: fe80::225:90ff:fe9c:631a/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
>   RX packets:1558294810 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:8570261350 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:102083292705 (102.0 GB)  TX bytes:76629844394725 (76.6
> TB)
>
>
> So, it should not be a simple dropped frames problem.
>
> Since the problem occurs only with larger messages, is there any
> size-limited buffer in ZeroMQ that may cause dropping of the messages?
> Or any other possible solution?
>
> Thank you for your help,
>
> Marko
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] missing messages on 40GbE network

2015-07-01 Thread Marko Vendelin
Dear ØMQ developers:

Synopsis: I am observing a strange interaction between storing
datastream on harddisks and a loss of ZeroMQ messages. It seems that
in my use case, when messages are larger than 2MB, some of them are
randomly dropped.

Full story:

I need to pump images acquired by fast scientific cameras into the
files with the rates approaching 25Gb/s. For that, images are acquired
in one server and transferred into the harddisk array using 40Gb/s
network. Since Linux-based solutions using iSCSI were not working very
well (maybe need to optimize more) and plain network applications
could use the full bandwidth, I decided to use a RAID-0-inspired
approach: make filesystem on each of 32 harddisks separately, run
small slave programs one per filesystem and let the slaves ask the
dataset server for a dataset in a loop. As a messaging system, I use
ZeroMQ and a REQ/REP connection. In general, all seems to work perfectly:
I am able to stream and record data at about 36Gb/s rates. However, at
some point (within 5-10 min), sometimes messages get lost.
Intriguingly, this occurs only if I write files and messages are 2MB
or larger. Much smaller messages do not seem to trigger this effect.
If I just stream data and either dump it or just calculate on the
basis of it, all messages go through. All messages go through if I use
1Gb network.

While in production code I stream data into HDF5, use zmqpp and
polling to receive messages, I have reduced the problematic code into
the simplest case using zmq.hpp, regular files, and plain send/recv
calls. Code is available at

http://www.ioc.ee/~markov/zmq/problem-missing-messages/

At the same time, there don't seem to be any excessive drops in
ethernet cards, as reported by ifconfig in Linux (slaves run on
Gentoo, server on Ubuntu):


ens1f1: flags=4163  mtu 9000
inet 192.168.38.1  netmask 255.255.255.252  broadcast 192.168.38.3
inet6 fe80::225:90ff:fe9c:62c3  prefixlen 64  scopeid 0x20
ether 00:25:90:9c:62:c3  txqueuelen 1000  (Ethernet)
RX packets 8568340799  bytes 76612663159251 (69.6 TiB)
RX errors 7  dropped 0  overruns 0  frame 7
TX packets 1558294820  bytes 93932603947 (87.4 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth3  Link encap:Ethernet  HWaddr 00:25:90:9c:63:1a
  inet addr:192.168.38.2  Bcast:192.168.38.3  Mask:255.255.255.252
  inet6 addr: fe80::225:90ff:fe9c:631a/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
  RX packets:1558294810 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8570261350 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:102083292705 (102.0 GB)  TX bytes:76629844394725 (76.6 TB)


So, it should not be a simple dropped frames problem.

Since the problem occurs only with larger messages, is there any
size-limited buffer in ZeroMQ that may cause dropping of the messages?
Or any other possible solution?
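
For reference, these are the ZeroMQ-side buffer knobs I am aware of (a
minimal pyzmq sketch with placeholder values; my understanding is that
REQ/REP should block rather than drop when the high-water marks fill up,
which is partly why I am puzzled):

import zmq

ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)

# Per-connection high-water marks, counted in messages (default 1000).
rep.setsockopt(zmq.SNDHWM, 1000)
rep.setsockopt(zmq.RCVHWM, 1000)

# Kernel socket buffers in bytes (SO_SNDBUF / SO_RCVBUF); by default the OS
# values are left in place.
rep.setsockopt(zmq.SNDBUF, 4 * 1024 * 1024)
rep.setsockopt(zmq.RCVBUF, 4 * 1024 * 1024)

# ZMQ_MAXMSGSIZE caps inbound message size in bytes; -1 (the default) means
# no limit, so a 2MB message would not be rejected by this.
rep.setsockopt(zmq.MAXMSGSIZE, -1)

rep.bind("tcp://*:5555")                     # placeholder endpoint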

Thank you for your help,

Marko
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev