I was running a program that allocated a couple of hundred REQ sockets to 
connect to a server with a single REP socket via a QUEUE device.
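
For reference, the topology is the standard request/reply chain: many REQ 
clients in front, a QUEUE device in the middle, and a single REP worker 
behind it. Here's a minimal sketch of the device side -- this assumes the 
ffi-rzmq binding, made-up tcp endpoints, and that the binding exposes the 
device as ZMQ::Device (the underlying call is libzmq's 
zmq_device(ZMQ_QUEUE, frontend, backend)):

  require 'ffi-rzmq'   # assumption: the ffi-rzmq binding

  ctx = ZMQ::Context.new(1)

  # Frontend faces the REQ clients, backend faces the single REP worker.
  frontend = ctx.socket(ZMQ::XREP)
  backend  = ctx.socket(ZMQ::XREQ)
  frontend.bind("tcp://127.0.0.1:5555")   # hypothetical endpoints
  backend.bind("tcp://127.0.0.1:5556")

  # Blocks here, shuttling requests and replies between the two sides.
  ZMQ::Device.new(ZMQ::QUEUE, frontend, backend)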

I noticed that when I allocated more than 64 sockets, I started getting an 
assertion failure:

  Assertion failed: nbytes == sizeof (command_t)  (mailbox.cpp:193)

I examined the code and saw a comment explaining why this should *never* 
happen: the command size has already been confirmed to be smaller than 
PIPE_BUF. However, I noticed a reference to SNDBUF higher up in that same 
function. I'm wondering whether I'm exhausting a system resource (SNDBUF in 
this case) by allocating so many sockets.

BTW, this shell has a very high file descriptor limit (25k), so I know I 
haven't hit that particular limit.

I can show the code, but it's Ruby and I know that gives some folks hives. :)

Just imagine a loop that allocates several hundred or several thousand REQ 
sockets that are all connecting to the same QUEUE device endpoint.
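
Something like this, in other words (a sketch assuming the ffi-rzmq binding 
and the same made-up endpoint as above; the real code does more work per 
socket, but the allocation pattern is the same):

  require 'ffi-rzmq'   # assumption: the ffi-rzmq binding

  ctx = ZMQ::Context.new(1)

  # Allocate a few hundred REQ sockets, all connected to the QUEUE
  # device's frontend. The mailbox assertion fires once the count
  # gets much past 64.
  sockets = []
  500.times do
    sock = ctx.socket(ZMQ::REQ)
    sock.connect("tcp://127.0.0.1:5555")   # hypothetical device endpoint
    sockets << sock
  end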

cr

