Re: [zeromq-dev] Odd numbers with zeromq

2012-09-21 Thread Robert G. Jakabosky
On Wednesday 19, Christian Martinez wrote:
 Maybe I'm missing something here, but are people asserting here that one
 can't do a request reply MEP with various RPC technologies and exceed 1000
 1KB messages a second?

With only 1 client and 1 concurrent request, then yes.  The msg/sec throughput 
will be limited by latency in this case.  Neither the client nor the server will be maxing 
out its CPU; each will be sitting idle most of the time waiting for a reply 
from the other side.  If you want more throughput, then you need to increase 
the number of concurrent requests sent by the clients.
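
As a back-of-the-envelope sketch of that ceiling (the 90 us round-trip figure below is a
made-up example, not a measurement from this test):

/* throughput ceiling of one synchronous request/reply client */
#include <stdio.h>

int main (void)
{
    double rtt_us = 90.0;     /* assumed request->reply round-trip latency */
    int concurrent = 1;       /* outstanding requests per client */
    double max_msgs = concurrent * (1e6 / rtt_us);
    printf ("max throughput: %.0f msg/sec\n", max_msgs);   /* ~11111 msg/sec */
    return 0;
}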

I did a quick comparison of zmq vs. (nginx + ab).

Hardware:
Client: Laptop 100Mbit connection 1 core 2.2Ghz AMD
Server: desktop 1Gbit connection 8 core 3.6Ghz AMD

Test parameters:
requests: 400,000
max concurrent: 8 concurrent requests, 8 concurrent TCP sockets
request/response size: 1k

zmq: (using lua-zmq + LuaJIT, see attached Lua scripts)
Client: 8 threads, so 8 concurrent requests, cpu maxed out.
Server: 1 thread, low cpu usage
throughput: 10,990 reqs/sec, about 90Mbits (both directions)
bottleneck: client cpu maxed & client bandwidth.

nginx: (1k post data including HTTP headers, 1k response data including HTTP 
headers)
Client: ab (apache bench) keep-alive, 8 concurrent requests, cpu maxed out.
Server: nginx keepalive_requests = 400,000, limited to 1 worker process.
throughput: 9,055 reqs/sec, about 74Mbits (both directions)
bottleneck: client cpu maxed.

For reference using PUB/SUB the server can push 1k size messages out at 11,335 
msg/sec, or about 92Mbits (in one direction, server -> client).  The bottleneck 
here is the 100Mbit client connection.

-- 
Robert G. Jakabosky
-- Copyright (c) 2010 Aleksey Yeschenko alek...@yeschenko.com
--
-- Permission is hereby granted, free of charge, to any person obtaining a copy
-- of this software and associated documentation files (the Software), to deal
-- in the Software without restriction, including without limitation the rights
-- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-- copies of the Software, and to permit persons to whom the Software is
-- furnished to do so, subject to the following conditions:
--
-- The above copyright notice and this permission notice shall be included in
-- all copies or substantial portions of the Software.
--
-- THE SOFTWARE IS PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-- THE SOFTWARE.

if not arg[3] then
print("usage: lua local_lat.lua <bind-to> <message-size> <roundtrip-count>")
os.exit()
end

local bind_to = arg[1]
local message_size = tonumber(arg[2])
local roundtrip_count = tonumber(arg[3])

local zmq = require"zmq"

local ctx = zmq.init(1)
local s = ctx:socket(zmq.REP)
s:bind(bind_to)

local msg = zmq.zmq_msg_t()

local timer

for i = 1, roundtrip_count do
	assert(s:recv_msg(msg))
	if not timer then
		timer = zmq.stopwatch_start()
	end
	assert(msg:size() == message_size, "Invalid message size")
	assert(s:send_msg(msg))
end

local elapsed = timer:stop()

s:close()
ctx:term()

local latency = elapsed / roundtrip_count / 2

print(string.format("mean latency: %.3f [us]", latency))
local secs = elapsed / (1000 * 1000)
print(string.format("elapsed = %f", secs))
print(string.format("msg/sec = %f", roundtrip_count / secs))

-- Copyright (c) 2011 Robert G. Jakabosky bo...@sharedrealm.com
--
-- Permission is hereby granted, free of charge, to any person obtaining a copy
-- of this software and associated documentation files (the Software), to deal
-- in the Software without restriction, including without limitation the rights
-- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-- copies of the Software, and to permit persons to whom the Software is
-- furnished to do so, subject to the following conditions:
--
-- The above copyright notice and this permission notice shall be included in
-- all copies or substantial portions of the Software.
--
-- THE SOFTWARE IS PROVIDED AS IS, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-- THE SOFTWARE.

if #arg < 1 then
print("usage: lua " .. arg[0] .. " [message-size] [roundtrip-count] [concurrent] [connect-to]")
end

local message_size = tonumber(arg[1] or 1)
local roundtrip_count = tonumber(arg[2

Re: [zeromq-dev] Too much ZeroMQ overhead versus plain TCP Java NIO Epoll (with measurements)

2012-08-29 Thread Robert G. Jakabosky
On Wednesday 29, Julie Anderson wrote:
 So questions remain:
 
 1) What does ZeroMQ do under the hood that justifies so many extra clock
 cycles? (I am really curious to know)

ZeroMQ is using background IO threads to do the sending/receiving.  So the 
extra latency is due to passing the messages between the application thread and 
the IO thread.


 2) Do people agree that 11 microseconds are just too much?

No.  A simple IO event loop using epoll is fine for an IO (network) bound 
application, but if you need to do complex work (cpu bound) mixed with non-
blocking IO, then ZeroMQ can make it easy to scale.

Also try comparing the latency of Java NIO using TCP/UDP against ZeroMQ using 
the inproc transport using two threads in the same JVM instance.

With ZeroMQ it is easy to do thread to thread, process to process, and/or 
server to server communication all at the same time using the same interface.

Basically ZeroMQ has a different use-case than a simple IO event loop.

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Too much ZeroMQ overhead versus plain TCP Java NIO Epoll (with measurements)

2012-08-29 Thread Robert G. Jakabosky
On Wednesday 29, Julie Anderson wrote:
 See my comments below:
 
 On Wed, Aug 29, 2012 at 4:06 PM, Robert G. Jakabosky
 
 bo...@sharedrealm.comwrote:
  On Wednesday 29, Julie Anderson wrote:
   So questions remain:
   
   1) What does ZeroMQ do under the hood that justifies so many extra
   clock cycles? (I am really curious to know)
  
  ZeroMQ is using background IO threads to do the sending/receiving.  So
  the extra latency is due to passing the messages between the application
  thread and
  the IO thread.
 
 This kind of thread architecture sucks for latency sensitive applications.
 That's why non-blocking I/O exists. That's my humble opinion and the
 numbers support it.

If low-latency is the most important thing for your application, then use a 
custom protocol & highly tuned network code.

ZeroMQ is not a low-level networking library; it provides some high-level 
features that are not available with raw sockets.

If you are planning on doing high-frequency trading, then you will need to 
write your own networking code (or FPGA logic) to squeeze out every last 
micro/nanosecond.  ZeroMQ is not going to be the right solution to every use-
case.


   2) Do people agree that 11 microseconds are just too much?
  
  No.  A simple IO event loop using epoll is fine for a IO (network) bound
  application, but if you need to do complex work (cpu bound) mixed with
  non- blocking IO, then ZeroMQ can make it easy to scale.
 
 Totally agree, but that has nothing to do with a financial application.
 Financial applications do not need to do complex CPU bound analysis like an
 image processing application would need. Financial applications only care
 about LATENCY and network I/O.

Not all financial applications care only about latency.  For some systems it is 
important to scale out to a very large number of subscribers and a large volume 
of messages.

When comparing ZeroMQ to raw network IO for one connection, ZeroMQ will have 
more latency overhead.  Try your test with many thousands of connections with 
subscriptions to lots of different topics, then ZeroMQ will start to come out 
ahead.


  Also try comparing the latency of Java NIO using TCP/UDP against ZeroMQ
  using
  the inproc transport using two threads in the same JVM instance.
 
 What is the problem with inproc? Just use a method call in the same JVM or
 shared memory for different JVMs. If you want inter-thread communication
 there are blazing-fast solutions in Java for that too. For example, I would
 be surprised if ZeroMQ can come close to Disruptor for inter-thread
 communication.

ZeroMQ's inproc transport can be used in an event loop alongside the TCP and 
IPC transports.  With ZeroMQ you can mix-and-match transports as needed.  If 
you can do all that with custom code with lower latency, then do it.  ZeroMQ 
is for people who don't have the experience to do that kind of thread-safe 
programming, or just want to scale out their application.


  With ZeroMQ it is easy to do thread to thread, process to process, and/or
  server to server communication all at the same time using the same
  interface.
 
 This generic API is cool, but it is solving a problem financial systems do
 not have and creating a bigger problem by adding latency.

ZeroMQ is not adding latency for no reason.  If you think that the latency can 
be eliminated, then go ahead and change the core code to not use IO threads.

  Basically ZeroMQ has different use-case then a simple IO event loop.
 
 I thought ZeroMQ flagship customers were financial institutions. Then maybe
 I was wrong.

ZeroMQ is competing with other Message-oriented middleware, like RabbitMQ, 
SwiftMQ, JMS, or other Message queuing systems.  These systems are popular 
with financial institutions.


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Too much ZeroMQ overhead versus plain TCP Java NIO Epoll (with measurements)

2012-08-29 Thread Robert G. Jakabosky
On Wednesday 29, Julie Anderson wrote:
 
 Nothing is perfect. I am just trying to understand ZeroMQ approach and its
 overhead on top of the raw network latency. Maybe a single-threaded ZeroMQ
 implementation for the future using non-blocking I/O?

You might be interested in xsnano [1], which is an experimental project to try 
different threading models (it should be possible to support a single-threaded 
model).  I am not sure how far along it is.


1. https://github.com/sustrik/xsnano



-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Too much ZeroMQ overhead versus plain TCP Java NIO Epoll (with measurements)

2012-08-29 Thread Robert G. Jakabosky
On Wednesday 29, Stuart Brandt wrote:
 Not sure I want to step into the middle of this, but here we go. I'd be
 really hesitant to base any evaluation of ZMQ's suitability for a highly
 scalable low latency application on local_lat/remote_lat. They appear to
 be single threaded synchronous tests which seems very unlike the kinds
 of applications being discussed (esp. if you're using NIO). More
 realistic is a network connection getting slammed with lots of
 concurrent sends and recvs, which is where lots of mistakes can be
 made if you roll your own.

local_lat/remote_lat both have two threads (one for the application and one 
for IO).  So each request message goes from:
1. local_lat to IO thread
2. IO thread send to tcp socket
-> network stack.
3. recv from tcp socket in remote_lat's IO thread
4. from IO thread to remote_lat
5. remote_lat back to IO thread
6. IO thread send to tcp socket
-> network stack.
7. recv from tcp socket in local_lat's IO thread
8. IO thread to local_lat.

So each message has to pass between threads 4 times (1,4,5,8) and go across 
the tcp socket 2 times (2-3, 6-7).

I think it would be interesting to see how latency is affected when there are 
many clients sending requests to a server (with one or more worker threads).  
With ZeroMQ it is very easy to create a server with one or many worker threads 
and handle many thousands of clients.  Doing the same without ZeroMQ is 
possible, but requires writing a lot more code.  But then writing it yourself 
will allow you to optimize it to your needs (latency vs throughput).
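
For illustration, a rough sketch of that kind of server in C, assuming the 2.x API: an
XREP frontend facing the clients, an XREQ backend over inproc, REP worker threads, and
zmq_device() gluing them together.  The endpoint, port and thread count are placeholder
values and error handling is omitted.

#include <zmq.h>
#include <pthread.h>

static void *worker (void *ctx)
{
    void *s = zmq_socket (ctx, ZMQ_REP);
    zmq_connect (s, "inproc://workers");
    while (1) {
        zmq_msg_t msg;
        zmq_msg_init (&msg);
        zmq_recv (s, &msg, 0);     /* request from some client */
        /* ... do the (possibly cpu bound) work here ... */
        zmq_send (s, &msg, 0);     /* reply routes back through the device */
    }
    return NULL;
}

int main (void)
{
    int i;
    pthread_t threads[4];
    void *ctx = zmq_init (1);
    void *clients = zmq_socket (ctx, ZMQ_XREP);   /* many client connections */
    void *workers = zmq_socket (ctx, ZMQ_XREQ);   /* fan-out to the workers */
    zmq_bind (clients, "tcp://*:5555");
    zmq_bind (workers, "inproc://workers");

    for (i = 0; i < 4; i++)
        pthread_create (&threads[i], NULL, worker, ctx);

    /* routes requests to idle workers and replies back to the right client */
    zmq_device (ZMQ_QUEUE, clients, workers);
    return 0;
}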

 As a purely academic discussion, though, I've uploaded raw C socket
 versions of a client and server that can be used to mimic local_lat and
 remote_lat -- at least for TCP sockets. On my MacBook, I get ~18
 microseconds per 40 byte packet across a test of 100 packets on
 local loopback. This is indeed about half of what I get with
 local_lat/remote_lat on tcp://127.0.0.1.
http://pastebin.com/4SSKbAgx   (echoloopcli.c)
http://pastebin.com/rkc6itTg  (echoloopsrv.c)
 
 There's probably some amount of slop/unfairness in there since I cut a
 lot of corners, so if folks want to pursue the comparison further, I'm
 more than willing to bring it closer to apples-to-apples.

echoloop*.c is testing throughput not latency, since it sends all messages at 
once instead of sending one message and waiting for it to return before 
sending the next message.  Try comparing it with local_thr/remote_thr.

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Too much ZeroMQ overhead versus plain TCP Java NIO Epoll (with measurements)

2012-08-29 Thread Robert G. Jakabosky
On Wednesday 29, Stuart Brandt wrote:
 Inline
 
 On 8/29/2012 10:37 PM, Robert G. Jakabosky wrote:
  echoloop*.c is testing throughput not latency, since it sends all
  messages at once instead of sending one message and waiting for it to
  return before sending the next message. Try comparing it with
  local_thr/remote_thr.
 
 Echoloopcli does a synchronous send, then a synchronous recv , then does
 it all again.  Echoloopsrv does a synchronous recv, then a synchronous
 send, then does it all again.  I stuck a while loop around the send call
 because it isn't guaranteed to complete with all bytes of my 40 byte
 packet having been sent. But since my send queue never maxes out, the
 'while' around send is overkill -- I get exactly 100 sends
 interleaved with 100 recvs.

Ah, sorry, I overlooked the outer loop.  So it is doing request/response 
instead of bulk send/recv like I had thought.

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] zeromq and libuv

2012-03-26 Thread Robert G. Jakabosky
On Sunday 25, rektide wrote:
 Robert, thanks for the reply. I confess to not understanding the issue
 fully. Are there any workarounds or possibilities you'd want to explore, to
 test out, as a possible way forwards? I'm just looking to pull whatever
 constructive guidance I can from you (or any others on the ml) on this
 issue.

If you don't have to support Windows, then you can bypass libuv and create 
an ev_io watcher for the zmq socket's FD with libev's default loop.

I can't think of a way to make zmq work with Windows' IOCP without changing 
zmq's API.

 On Wed, Mar 21, 2012 at 11:49 PM, Robert G. Jakabosky
 bo...@sharedrealm.com
 
  wrote:
  
  On Monday 19, John D. Mitchell wrote:
   Hmm... Something is very confused in this discussion...
   
   The zeromq.node project and the performance problems there have to do
  
  that
  
   projects's Node.js binding with the zeromq libraries. I.e. it doesn't
  
  seem
  
   to have much, if anything to do with libuv itself.
  
  libuv doesn't provide a way to register a FD for io events.  It only
  allows starting an async read/write, with notification of when the
  read/write finishes.  The FD for a zmq socket can't be read from by the
  application, so
  it can't be used with libev.  When using zmq sockets in an event loop,
  one has
  to register for notification of read events on the FD and then check for
  the
  true events with the ZMQ_EVENTS socket option.
  
  I think the main reason libuv is designed like that is because that is
  how Window's IOCP works.
  
  As far as I can tell zeromq.node only works on *nix systems (where libuv
  uses
  libev as the event loop backend).  zeromq.node has to bypass libuv and
  use a
  libev IO watcher directly to work.
  
  The Luvit [1] (LuaJIT + libuv, like nodejs but with Lua instead of
  Javascript)
  has the same problem with trying to use the Lua bindings [2] for zmq.
  
  1. https://github.com/luvit/luvit
  2. https://github.com/Neopallium/lua-zmq
  
  --
  Robert G. Jakabosky
  ___
  zeromq-dev mailing list
  zeromq-dev@lists.zeromq.org
  http://lists.zeromq.org/mailman/listinfo/zeromq-dev


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Epolling on FD doesn't work for 2.1.11

2012-03-23 Thread Robert G. Jakabosky
On Friday 23, Andrzej K. Haczewski wrote:
 2012/3/22 Robert G. Jakabosky bo...@sharedrealm.com:
  zmq sockets are alway level-triggered.  Your code should call
  zmq_send/zmq_recv until it returns EAGAIN, then register the FD for read
  events with your event loop.  Your code also needs to alway pass the
  ZMQ_NOBLOCK flag to zmq_send/zmq_recv.
  
  For an lib-ev event loop use an idle watcher to call zmq_recv when it is
  not block.  Once zmq_recv returns EAGAIN, stop the idle watcher and
  start an io watcher for read events.  The ev_io callback needs to get
  the value of ZMQ_EVENTS with zmq_getsockopt(), then check the events
  value for ZMQ_POLLIN. As soon as ZMQ_EVENTS has ZMQ_POLLIN stop the io
  watcher and start the idle watcher again.
  
  Basically if the zmq socket is readable, your code must keep calling
  zmq_recv() until it returns EAGAIN without waiting for another IO read
  event from the event loop.
  
  I have written an example [1] in Lua that reads from a zmq SUB socket
  using the lib-ev event loop.
  
  Also there is client [2] and server [3] example (using REQ/REP sockets)
  that can use either lib-ev [4] or epoll [5] event loops.
  
  I hope that helps.
  
  1. https://github.com/Neopallium/lua-
  zmq/blob/master/examples/subscriber_ev.lua
  2.
  https://github.com/Neopallium/lua-zmq/blob/master/examples/client_poll.l
  ua 3.
  https://github.com/Neopallium/lua-zmq/blob/master/examples/server_poll.l
  ua 4.
  https://github.com/Neopallium/lua-zmq/blob/master/examples/poller/ev.lua
  5.
  https://github.com/Neopallium/lua-zmq/blob/master/examples/poller/epoll.
  lua


A correction to my last email, I should have said zmq sockets are always edge-
triggered.


 Thank you so much for your assistance. I'm refactoring my code to
 include the scheme you've proposed.
 
 There is one thing that bothers me though: why does the scheme I used
 work for ZeroMQ 3.1.0 and CrossroadsIO, as I tried both and they work
 with registering FD right away with no recv() calls in between
 connect() and epoll(), and it doesn't work for ZeroMQ 2.1.

Version 3.1 might be writing something to the socket's pipe when new sockets 
are created, unlike 2.1.  I haven't used 3.1 much, only did a little bit of 
testing when adding support for 3.1 to my Lua bindings.

Try using your code on a SUB socket with 3.1 and send a burst of messages.  I 
think you will still run into problems with 3.1 when many messages are ready 
to be received from the socket.

With 'edge-triggered' sockets your code must always keep reading from the 
socket until the read queue is empty.  Now this doesn't mean that you can't 
call poll/epoll to check for other events, just make sure you don't ask epoll 
to block for events (use timeout=0 so it doesn't block).

What I recommend is to use some type of work queue (or Idle watchers with lib-
ev) to process 'edge-triggered' sockets that are not currently blocked.  The 
worker for each socket should only be allowed to call recv/send X times each 
time the worker is called.  Higher values of X give better throughput on 
sockets transferring lots of data, but can increase latency for other sockets 
that only need to send/recv a small amount of data.  If the socket's worker 
hits the limit before getting EAGAIN, then it should yield back to the event 
loop.
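
A rough sketch of such a bounded worker in C; handle_message() and requeue_socket() are
hypothetical placeholders for the application's own logic and work-queue mechanism:

#include <errno.h>
#include <zmq.h>

void handle_message (zmq_msg_t *msg);   /* application logic (placeholder) */
void requeue_socket (void *s);          /* put socket back on the work queue (placeholder) */

/* Drain at most `budget` messages per wakeup; if the queue still has data
   after that, re-queue the socket so other sockets get a turn. */
static void drain_socket (void *s, int budget)
{
    int i;
    for (i = 0; i < budget; i++) {
        zmq_msg_t msg;
        zmq_msg_init (&msg);
        if (zmq_recv (s, &msg, ZMQ_NOBLOCK) != 0) {
            zmq_msg_close (&msg);
            return;   /* EAGAIN means drained; real errors need handling too */
        }
        handle_message (&msg);
        zmq_msg_close (&msg);
    }
    requeue_socket (s);   /* budget used up, more data may remain */
}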

 Also, I wonder if there might be a race in proposed approach, between
 getting EAGAIN and starting to watch FD, since it might be quite a lot
 of time between I process all the pending tasks I have on my queue
 (which that idle recv() task will belong to) and actually entering
 epoll. Or does ZeroMQ 2.1 guarantee that after throwing EAGAIN at user
 it will never clear pending event from socket before user has a chance
 of epolling on that socket's FD?

I think the only thing that can clear a socket's pending events are these 
calls:
zmq_recv()
zmq_send()
zmq_getsockopt(sock, ZMQ_EVENTS,...)

If ZMQ_EVENTS is 0 then it is safe to block for a read event.

Hmm, I wonder if there could be an issue with this case:
1. APP: zmq_recv() returns EAGAIN
2. APP: registers zmq socket's FD (i.e. its pipe FD) with event loop for read 
event.
3. ZMQ_IO_THREAD: puts new message on read queue (one byte will be written to 
the socket's pipe).
4. APP: zmq_send() is called to send a message (this will consume the byte 
from the socket's pipe).
-> at this point the APP should resume calling zmq_recv() until EAGAIN.
5. APP: event loop will block, even though a message can be read with 
zmq_recv().

But this would only be a problem for bi-directional (XREQ/XREP,DEALER/ROUTER) 
sockets that can send & recv at any time.

One way to handle this would be to mark the socket as recv blocked, then if 
zmq_send is called on a recv blocked socket, check ZMQ_EVENTS to see if it 
should be unblocked.  The reverse should be done for a send blocked socket 
when zmq_recv() is called, or just don't call zmq_recv on a socket that is 
send blocked.
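
Sketch of that bookkeeping, assuming a small wrapper struct around the socket
(illustrative only, untested):

#include <stdint.h>
#include <zmq.h>

struct zsock {
    void *s;
    int   recv_blocked;   /* set when zmq_recv last returned EAGAIN */
    int   send_blocked;   /* set when zmq_send last returned EAGAIN */
};

/* Call after any zmq_send(); the send may have consumed the pipe's wakeup byte. */
static void check_unblock_recv (struct zsock *zs)
{
    uint32_t events = 0;
    size_t len = sizeof (events);
    if (zs->recv_blocked) {
        zmq_getsockopt (zs->s, ZMQ_EVENTS, &events, &len);
        if (events & ZMQ_POLLIN)
            zs->recv_blocked = 0;   /* resume draining with zmq_recv() */
    }
}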

-- 
Robert G. Jakabosky

Re: [zeromq-dev] Epolling on FD doesn't work for 2.1.11

2012-03-22 Thread Robert G. Jakabosky
On Tuesday 20, Andrzej K. Haczewski wrote:
 First of all I'd like to say hello to everyone, since this is my first
 post here. So:
 
 Hello,
 
 I've been trying to integrate ZeroMQ with libev event loop and I've
 hit a strange behavior (or a bug?). In particular, as anyone
 integrating with event loop, I've been watching (libev calls it like
 that) my sockets for any input events, passing file descriptors taken
 using getsockopt(ZMQ_FD).
 
 How surprised I was that for REQ and REP sockets only REP socket
 received notifications. Trying to isolate an error I simulated the
 behavior using Python and epoll:
 http://pastebin.com/kiBtu4jv
 
 On my system (linux 3.1.6 amd64) and ZeroMQ 2.1.11 the output of that
 code looks like this:
 [(16, 1)]
 SUCCESSFULLY RECEIVED REQUEST!
 []
 []
 []
 []
 []
 []
 ...
 
 Upgrading ZeroMQ to 3.1.0 the situation changes dramatically,
 outputting what I expected:
 [(9, 1), (10, 1)]
 SUCCESSFULLY RECEIVED REQUEST!
 [(9, 1), (10, 1)]
 RECEIVED GOOD REPLY!
 
 Trying to get help on #zeromq I was told that it might be an issue
 with level-triggering epoll, so I tried switching to edge-triggered
 poll. Unfortunately it doesn't work for neither 2.1.11 nor 3.1.0.

zmq sockets are always level-triggered.  Your code should call 
zmq_send/zmq_recv until it returns EAGAIN, then register the FD for read 
events with your event loop.  Your code also needs to always pass the 
ZMQ_NOBLOCK flag to zmq_send/zmq_recv.

For a lib-ev event loop, use an idle watcher to call zmq_recv when it is not 
blocked.  Once zmq_recv returns EAGAIN, stop the idle watcher and start an io 
watcher for read events.  The ev_io callback needs to get the value of 
ZMQ_EVENTS with zmq_getsockopt(), then check the events value for ZMQ_POLLIN.  
As soon as ZMQ_EVENTS has ZMQ_POLLIN stop the io watcher and start the idle 
watcher again.

Basically if the zmq socket is readable, your code must keep calling 
zmq_recv() until it returns EAGAIN without waiting for another IO read event 
from the event loop.
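
A rough C sketch of that idle/io watcher hand-off, assuming libev and a SUB socket;
the endpoint is a placeholder and error handling is omitted:

#include <ev.h>
#include <zmq.h>
#include <errno.h>
#include <stdint.h>

static void *zsock;        /* the zmq socket being drained */
static ev_io   io_w;       /* waits for read events on the ZMQ_FD */
static ev_idle idle_w;     /* keeps calling zmq_recv while data is queued */

static void idle_cb (struct ev_loop *loop, ev_idle *w, int revents)
{
    zmq_msg_t msg;
    zmq_msg_init (&msg);
    if (zmq_recv (zsock, &msg, ZMQ_NOBLOCK) == 0) {
        /* ... handle the message ... */
        zmq_msg_close (&msg);
        return;                    /* stay in the idle watcher, keep draining */
    }
    zmq_msg_close (&msg);
    if (errno == EAGAIN) {         /* drained: switch to waiting on the FD */
        ev_idle_stop (loop, &idle_w);
        ev_io_start (loop, &io_w);
    }
}

static void io_cb (struct ev_loop *loop, ev_io *w, int revents)
{
    uint32_t events = 0;
    size_t len = sizeof (events);
    zmq_getsockopt (zsock, ZMQ_EVENTS, &events, &len);   /* the "true" events */
    if (events & ZMQ_POLLIN) {     /* readable again: go back to draining */
        ev_io_stop (loop, &io_w);
        ev_idle_start (loop, &idle_w);
    }
}

int main (void)
{
    int fd;
    size_t fdlen = sizeof (fd);
    struct ev_loop *loop = EV_DEFAULT;
    void *ctx = zmq_init (1);

    zsock = zmq_socket (ctx, ZMQ_SUB);
    zmq_setsockopt (zsock, ZMQ_SUBSCRIBE, "", 0);
    zmq_connect (zsock, "tcp://localhost:5555");
    zmq_getsockopt (zsock, ZMQ_FD, &fd, &fdlen);

    ev_io_init (&io_w, io_cb, fd, EV_READ);
    ev_idle_init (&idle_w, idle_cb);
    ev_idle_start (loop, &idle_w);   /* start by trying to read */
    ev_run (loop, 0);
    return 0;
}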

I have written an example [1] in Lua that reads from a zmq SUB socket using 
the lib-ev event loop.

Also there is client [2] and server [3] example (using REQ/REP sockets) that 
can use either lib-ev [4] or epoll [5] event loops.

I hope that helps.

1. https://github.com/Neopallium/lua-zmq/blob/master/examples/subscriber_ev.lua
2. https://github.com/Neopallium/lua-zmq/blob/master/examples/client_poll.lua
3. https://github.com/Neopallium/lua-zmq/blob/master/examples/server_poll.lua
4. https://github.com/Neopallium/lua-zmq/blob/master/examples/poller/ev.lua
5. https://github.com/Neopallium/lua-zmq/blob/master/examples/poller/epoll.lua

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] zeromq and libuv

2012-03-21 Thread Robert G. Jakabosky
On Monday 19, John D. Mitchell wrote:
 Hmm... Something is very confused in this discussion...
 
 The zeromq.node project and the performance problems there have to do that
 projects's Node.js binding with the zeromq libraries. I.e. it doesn't seem
 to have much, if anything to do with libuv itself.

libuv doesn't provide a way to register an FD for io events.  It only allows 
starting an async read/write, with notification of when the read/write 
finishes.  The FD for a zmq socket can't be read from by the application, so 
it can't be used with libuv.  When using zmq sockets in an event loop, one has 
to register for notification of read events on the FD and then check for the 
true events with the ZMQ_EVENTS socket option.

I think the main reason libuv is designed like that is because that is how 
Windows' IOCP works.

As far as I can tell zeromq.node only works on *nix systems (where libuv uses 
libev as the event loop backend).  zeromq.node has to bypass libuv and use a 
libev IO watcher directly to work.

The Luvit [1] (LuaJIT + libuv, like nodejs but with Lua instead of Javascript) 
has the same problem with trying to use the Lua bindings [2] for zmq.

1. https://github.com/luvit/luvit
2. https://github.com/Neopallium/lua-zmq

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] zmq 2.1 doesn't follow the ZMTP/1.0 spec.

2011-12-06 Thread Robert G. Jakabosky
When sending messages with zmq 2.1.10 some of the reserved bits (1-6) are set 
to one instead of zero, contrary to the ZMTP/1.0 spec [1].  The spec says that 
they MUST be zero.  Having the reserved bits set to one will cause 
compatibility problems when later versions want to set those bits for new 
features, so I think this is a bug in the zmq library code.

I found this out when working on a zmq protocol dissector [2] for Wireshark.

1. http://rfc.zeromq.org/spec:13
2. https://github.com/Neopallium/lua-zmq/tree/master/ws
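
For anyone who wants to check their own captures, a tiny sketch of the check, assuming
a buffer that starts at a ZMTP/1.0 frame with a short (one-octet) length field:

#include <stdio.h>
#include <stdint.h>

/* ZMTP/1.0 short frame: [length octet][flags octet][body ...]
   flags bit 0 = MORE; the remaining bits are reserved and MUST be zero. */
static int reserved_bits_set (const uint8_t *frame)
{
    return (frame[1] & ~0x01) != 0;
}

int main (void)
{
    uint8_t good[] = { 0x05, 0x00, 't', 'e', 's', 't' };
    uint8_t bad[]  = { 0x05, 0x7e, 't', 'e', 's', 't' };   /* reserved bits set */
    printf ("%d %d\n", reserved_bits_set (good), reserved_bits_set (bad));  /* 0 1 */
    return 0;
}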

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] zmq 2.1 doesn't follow the ZMTP/1.0 spec.

2011-12-06 Thread Robert G. Jakabosky
On Tuesday 06, Martin Sustrik wrote:
 Hi Robert,
 
 This seems to be a bug in 2.1. It doesn't happen with 3.0 & 3.1. Feel
 free to fill in an issue in the bug tracker.

https://zeromq.jira.com/browse/LIBZMQ-293


 A comment on wireshark dissector (I've written the AMQP dissector so I
 have some minimal experience in the area): IIRC wireshark starts
 dissecting every packet from the beginning. This assumes that the
 application protocol elements are aligned with TCP packets. This works
 OK in most cases as the applications tend to write protocol elements to
 the socket using separate send() invocations. That in turn in most cases
 results in protocol elements being aligned with TCP packets. However,
 given that 0MQ does aggressive message batching protocol elements are
 often not aligned with TCP packets :(

Yup, I already have support for TCP reassembly to deal with the frames that 
don't align with the TCP segments.

The only issue right now is if someone starts capturing packets in the middle 
of a TCP session and the first packet captured starts with a partial zmq frame, 
then it will be out-of-sync for all frames after that.

 This effect should be visible when sending a lot of messages at the same
 time. You can use the throughput test in the perf subdirectory to emulate it.

That is what I used for testing the dissector.  I have also tested multi-part 
messages.

One thing I might add later is the grouping of frames into messages; right now 
it just shows all the frames in one list grouped by the packet/tcp segment 
(reassembled frames end up in their own packet group).

 When sending messages sporadically, the dissector should work OK though.
 
 Martin
 
 On 7/12/2011, Robert G. Jakabosky bo...@sharedrealm.com wrote:
 When sending messages with zmq 2.1.10 some of the reserved bits (1-6) are
 set to one instead of zero like the ZMTP/1.0 specs [1] says.  The spec
 says that they MUST be zero.  Having the reserved bits set to one will
 cause compatibility problems when later versions want to set those bits
 for new features, so I think this is a bug in the zmq library code.
 
 I found this out when working on a zmq protocol dissector [2] for
 Wireshark.
 
 1. http://rfc.zeromq.org/spec:13
 2. https://github.com/Neopallium/lua-zmq/tree/master/ws
 
 --
 Robert G. Jakabosky
 ___
 zeromq-dev mailing list
 zeromq-dev@lists.zeromq.org
 http://lists.zeromq.org/mailman/listinfo/zeromq-dev
 
 ___
 zeromq-dev mailing list
 zeromq-dev@lists.zeromq.org
 http://lists.zeromq.org/mailman/listinfo/zeromq-dev


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] zmq 2.1 doesn't follow the ZMTP/1.0 spec.

2011-12-06 Thread Robert G. Jakabosky
On Tuesday 06, Pieter Hintjens wrote:
 On Wed, Dec 7, 2011 at 2:22 AM, Robert G. Jakabosky
 
 bo...@sharedrealm.com wrote:
  When sending messages with zmq 2.1.10 some of the reserved bits (1-6) are
  set to one instead of zero like the ZMTP/1.0 specs [1] says.  The spec
  says that they MUST be zero.
 
 Robert,
 
 Nice catch. Can you log an issue and also provide a test case? Thanks.

Attached a test case to the issue.
https://zeromq.jira.com/browse/LIBZMQ-293

 
 -Pieter
 ___
 zeromq-dev mailing list
 zeromq-dev@lists.zeromq.org
 http://lists.zeromq.org/mailman/listinfo/zeromq-dev


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] PUB-SUB works, server restarts, PUB-SUB fails for existing workers.

2011-08-30 Thread Robert G. Jakabosky
On Tuesday 30, Pieter Hintjens wrote:
 On Tue, Aug 30, 2011 at 5:49 AM, alotsof alot...@gmx.net wrote:
  What surprises me though is that I can reduce the problem to one of
  order of execution:
  
  - spawn worker first, setup 0MQ second: reconnection works.
  - setup 0MQ first, spawn worker second: reconnection fails.

The subprocess.Popen call most likely is using popen(), which uses fork() and 
that will cause the child process to inherit the parent's set of open file 
descriptors.  This means that the child process is keeping the server socket 
alive even when the parent closes the socket or dies.

One way to test this is to check if the server socket is still in the LISTEN 
state and owned by the worker process.
Run: netstat -pln | grep tcp

Look for the socket bound to your server's tcp port.  You can also grep for 
your worker's pid.

 
 To be honest, your test set-up is not clean. You should remove the
 start worker code from the publisher and start the two processes
 explicitly from a single shell script.

Do what Pieter said and start the worker from a shell script as a separate 
process (i.e. not a child of the server).

Or if you really need to start the workers from the server process, then use a 
wrapper script to daemonize (fork + execve) the worker process.
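
A minimal sketch of such a wrapper in C (simplified: no double fork, no signal handling;
the descriptor range closed below is an arbitrary example):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* usage: ./daemonize <program> [args...]
   The parent returns immediately, so the worker is no longer a direct child
   of the server; the child also drops inherited descriptors before exec. */
int main (int argc, char *argv[])
{
    pid_t pid;
    int fd;

    if (argc < 2) {
        fprintf (stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }
    pid = fork ();
    if (pid < 0)
        return 1;
    if (pid > 0)
        return 0;                 /* parent: done */
    setsid ();                    /* child: new session, detach from the parent */
    for (fd = 3; fd < 1024; fd++) /* close inherited descriptors (e.g. the server socket) */
        close (fd);
    execvp (argv[1], &argv[1]);   /* replace the child with the worker */
    perror ("execvp");
    return 1;
}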

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Strange problem with lua-zmq on Windows

2011-08-10 Thread Robert G. Jakabosky
On Wednesday 10, Ross Andrews wrote:
 I'm not sure if this is a Windows problem, a Lua problem, a ZeroMQ problem,
 or a my-code problem, but after about three weeks I am finally able to
 reproduce it in a small example so here goes.
 
 Here's the server:
 
 require 'zmq'
 
 ctx = zmq.init(1)
 socket = ctx:socket(zmq.REP)
 socket:bind 'tcp://*:4568'
 n = 0
 
 while true do
socket:recv()
n = n + 1
print(n)
socket:send('pong')
 end
 
 socket:close()
 
 --
 
 And here's the client:
 
 require 'zmq'
 
 ctx = zmq.init(1)
 k = 0
 
 while k < 10 do
local n, sockets = 0, {}
 
 while n < 50 do
   local socket = ctx:socket(zmq.REQ)
   socket:connect('tcp://localhost:4568')
   socket:send('ping')
   table.insert(sockets, socket)
   n = n + 1
end
 
for _, socket in ipairs(sockets) do
   print(socket:recv())
   socket:close()
end
 
k = k + 1
 end
 
 --
 
 The idea is that I have a client that accepts requests, spawns threads (or
 maybe uses a thread pool, either way I can't control it) to handle them.
 Handling them consists of creating a REQ socket, sending a request to
 another (single-threaded) thing to handle them and give a response back,
 and then returning the response. I have to make a new socket each time
 because I can't control which thread the requesting thing is on, and I
 can't share ZMQ sockets across threads.
 
 The problem is that this case thrashes and then eventually kills lua-zmq.
 It should handle 500 messages here, actually it dies at an indeterminate
 point between about 100 and about 400.
 
 Can anyone give me a clue what's going on?


Try the attached C version of those scripts to help narrow down where the 
problem is.  I have only tested the code on Linux, but it should be simple to 
compile it on Windows.

Using the C version on my Linux system with zmq 2.1.7, I see short pauses when 
running the client with these parameters:
./test_req_client 100 200

but it does finish all requests.  The pauses might be happening because of the 
high number of tcp connections being open/closed in a short period.

-- 
Robert G. Jakabosky
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <zmq.h>

#define CHECK(fname) do { \
	if(rc != 0) { \
		printf(#fname " error: %s\n", zmq_strerror(errno)); \
		return -1; \
	} \
} while(0)

int main(int argc, char *argv[]) {
	zmq_msg_t msg;
	void *ctx;
	void *s;
	int rc;
	int n = 0;

	ctx = zmq_init(1);
	assert(ctx != NULL);

	s = zmq_socket(ctx, ZMQ_REP);
	assert(s != NULL);

	rc = zmq_bind(s, "tcp://*:4568");
	CHECK(zmq_bind);

	rc = zmq_msg_init(&msg);
	CHECK(zmq_msg_init);

	while(1) {
		rc = zmq_recv(s, &msg, 0);
		CHECK(zmq_recv);
		n++;
		printf("%d\n", n);
		rc = zmq_send(s, &msg, 0);
		CHECK(zmq_send);
	}

	rc = zmq_msg_close(&msg);
	CHECK(zmq_msg_close);

	zmq_sleep(1);

	rc = zmq_close(s);
	CHECK(zmq_close);

	rc = zmq_term(ctx);
	CHECK(zmq_term);

	return 0;
}

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <string.h>
#include <zmq.h>

#define MAX_SOCKETS 1
#define MAX_BUF 1024

#define CHECK(fname) do { \
	if(rc != 0) { \
		printf(#fname " error: %s\n", zmq_strerror(errno)); \
		return -1; \
	} \
} while(0)

int main(int argc, char *argv[]) {
	zmq_msg_t msg;
	void *ctx;
	void **sockets;
	void *s;
	char tmp[MAX_BUF];
	int max_k = 10;
	int max_n = 50;
	int rc;
	int n = 0;
	int k = 0;
	int len = 0;

	if(argc > 1) {
		max_k = atoi(argv[1]);
	}

	if(argc > 2) {
		max_n = atoi(argv[2]);
		assert(max_n < MAX_SOCKETS);
	}

	sockets = (void **)calloc(max_n, sizeof(void *));
	assert(sockets != NULL);

	ctx = zmq_init(1);
	assert(ctx != NULL);

	for(k = 0; k < max_k; k++) {
		for(n = 0; n < max_n; n++) {
			s = zmq_socket(ctx, ZMQ_REQ);
			assert(s != NULL);

			rc = zmq_connect(s, "tcp://localhost:4568");
			CHECK(zmq_connect);

			rc = zmq_msg_init_data(&msg, "PING", 4, NULL, NULL);
			CHECK(zmq_msg_init_data);

			rc = zmq_send(s, &msg, 0);
			CHECK(zmq_send);

			sockets[n] = s;
		}

		for(n = 0; n < max_n; n++) {
			s = sockets[n];

			rc = zmq_msg_init(&msg);
			CHECK(zmq_msg_init);

			rc = zmq_recv(s, &msg, 0);
			CHECK(zmq_recv);

			len = zmq_msg_size(&msg);
			assert(len < MAX_BUF);
			memcpy(tmp, zmq_msg_data(&msg), len);
			tmp[len] = '\0';
			printf("%s: k=%d, n=%d\n", tmp, k, n);

			rc = zmq_msg_close(&msg);
			CHECK(zmq_msg_close);

			sockets[n] = NULL;
			rc = zmq_close(s);
			CHECK(zmq_close);
		}
	}

	zmq_sleep(1);

	for(n = 0; n < max_n; n++) {
		s = sockets[n];
		if(s == NULL) continue;
		sockets[n] = NULL;

		rc = zmq_close(s);
		CHECK(zmq_msg_close);
	}

	rc = zmq_term(ctx);
	CHECK(zmq_term);

	free(sockets);
	return 0;
}

___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] [PATCH] Add note about thread-safety to zmq_msg_init_data() manpage.

2011-04-03 Thread Robert G. Jakabosky
The manpage for the zmq_msg_init_data() function needs a note about thread-
safety of the deallocation callback function.
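
For illustration, a small example of the call in question; the free function below is
trivially thread-safe because it only calls free(), but anything that touches shared
state would need its own locking:

#include <stdlib.h>
#include <string.h>
#include <zmq.h>

/* Called by 0MQ from one of its own threads once the buffer has been sent. */
static void free_buffer (void *data, void *hint)
{
    (void) hint;
    free (data);
}

static int send_owned_buffer (void *socket, const char *text)
{
    zmq_msg_t msg;
    size_t len = strlen (text);
    char *buf = malloc (len);
    memcpy (buf, text, len);

    /* zero-copy send: 0MQ takes ownership and calls free_buffer later,
       from an arbitrary (IO) thread */
    if (zmq_msg_init_data (&msg, buf, len, free_buffer, NULL) != 0)
        return -1;
    return zmq_send (socket, &msg, 0);
}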

-- 
Robert G. Jakabosky
From a8379425d8e16e6922058c2844ab57a80406c071 Mon Sep 17 00:00:00 2001
From: Robert G. Jakabosky bo...@sharedrealm.com
Date: Sun, 3 Apr 2011 10:24:52 -0700
Subject: [PATCH] Add note about thread-safety to zmq_msg_init_data() manpage.

Signed-off-by: Robert G. Jakabosky bo...@sharedrealm.com
---
 doc/zmq_msg_init_data.txt |3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/doc/zmq_msg_init_data.txt b/doc/zmq_msg_init_data.txt
index 8378757..191019b 100644
--- a/doc/zmq_msg_init_data.txt
+++ b/doc/zmq_msg_init_data.txt
@@ -25,6 +25,9 @@ If provided, the deallocation function 'ffn' shall be called once the data
 buffer is no longer required by 0MQ, with the 'data' and 'hint' arguments
 supplied to _zmq_msg_init_data()_.
 
+CAUTION: The deallocation function 'ffn' needs to be thread-safe, since it
+will be called from an arbitrary thread.
+
 CAUTION: Never access 'zmq_msg_t' members directly, instead always use the
 _zmq_msg_ family of functions.
 
-- 
1.7.4.1

___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] thread_lat & thread_thr performance examples.

2011-03-30 Thread Robert G. Jakabosky
To test thread-to-thread performance I combined the local_*/remote_* from 
zeromq2/perf/* performance examples into one process using threads and a 
shared 0MQ context.

These examples will default to using inproc:// transport.

The attached code has the same copyright & license notice as the other perf. 
examples since most of the code was just copied over.

The nice thing about these new examples is that it allows users to quickly see 
what type of latency/throughput they can get between threads using 0MQ.

-- 
Robert G. Jakabosky
/*
Copyright (c) 2007-2011 iMatix Corporation
Copyright (c) 2007-2011 Other contributors as noted in the AUTHORS file

This file is part of 0MQ.

0MQ is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by
the Free Software Foundation; either version 3 of the License, or
(at your option) any later version.

0MQ is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License
along with this program.  If not, see http://www.gnu.org/licenses/.
*/

#include "../include/zmq.h"
#include "../include/zmq_utils.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

static const char *connect_to = "inproc://lat_test";
static const char *bind_to = "inproc://lat_test";
static int roundtrip_count = 10;
static size_t message_size = 1;

/*
 * local_lat - simulates the local_lat process.
 */
static void *
local_lat (void *ctx)
{
void *s;
int rc;
int i;
zmq_msg_t msg;

s = zmq_socket (ctx, ZMQ_REP);
if (!s) {
printf ("error in zmq_socket: %s\n", zmq_strerror (errno));
return NULL;
}

rc = zmq_connect (s, connect_to);
if (rc != 0) {
printf ("error in zmq_connect: %s\n", zmq_strerror (errno));
return NULL;
}

rc = zmq_msg_init (&msg);
if (rc != 0) {
printf ("error in zmq_msg_init: %s\n", zmq_strerror (errno));
return NULL;
}

for (i = 0; i != roundtrip_count; i++) {
rc = zmq_recv (s, &msg, 0);
if (rc != 0) {
printf ("error in zmq_recv: %s\n", zmq_strerror (errno));
return NULL;
}
if (zmq_msg_size (&msg) != message_size) {
printf ("message of incorrect size received\n");
return NULL;
}
rc = zmq_send (s, &msg, 0);
if (rc != 0) {
printf ("error in zmq_send: %s\n", zmq_strerror (errno));
return NULL;
}
}

rc = zmq_msg_close (&msg);
if (rc != 0) {
printf ("error in zmq_msg_close: %s\n", zmq_strerror (errno));
return NULL;
}

zmq_sleep (1);

rc = zmq_close (s);
if (rc != 0) {
printf ("error in zmq_close: %s\n", zmq_strerror (errno));
return NULL;
}

return NULL;
}

/*
 * run remote_lat code in main thread and local_lat code into child thread.
 */
int main (int argc, char *argv [])
{
pthread_t local_thread;
void *ctx;
void *s;
int rc;
int i;
zmq_msg_t msg;
void *watch;
unsigned long elapsed;
double latency;

if (argc <= 1) {
printf ("usage: thread_lat [message-size] [roundtrip-count] [bind-to] [connect-to]\n");
}
if(argc > 1) message_size = atoi(argv[1]);
if(argc > 2) roundtrip_count = atoi(argv[2]);
if(argc > 3) bind_to = argv[3];
if(argc > 4) connect_to = argv[4];

ctx = zmq_init (1);
if (!ctx) {
printf ("error in zmq_init: %s\n", zmq_strerror (errno));
return -1;
}

s = zmq_socket (ctx, ZMQ_REQ);
if (!s) {
printf ("error in zmq_socket: %s\n", zmq_strerror (errno));
return -1;
}

rc = zmq_bind (s, bind_to);
if (rc != 0) {
printf ("error in zmq_bind: %s\n", zmq_strerror (errno));
return -1;
}

printf ("message size: %d [B]\n", (int) message_size);
printf ("roundtrip count: %d\n", (int) roundtrip_count);

/* start thread for local_lat code. */
pthread_create (&local_thread, NULL, local_lat, ctx);

rc = zmq_msg_init_size (&msg, message_size);
if (rc != 0) {
printf ("error in zmq_msg_init_size: %s\n", zmq_strerror (errno));
return -1;
}
memset (zmq_msg_data (&msg), 0, message_size);

watch = zmq_stopwatch_start ();

for (i = 0; i != roundtrip_count; i++) {
rc = zmq_send (s, &msg, 0);
if (rc != 0) {
printf ("error in zmq_send: %s\n", zmq_strerror (errno));
return -1;
}
rc = zmq_recv (s, &msg, 0);
if (rc != 0) {
printf ("error in zmq_recv: %s\n", zmq_strerror (errno));
return -1;
}
if (zmq_msg_size (&msg) != message_size) {
printf (message of incorrect size

Re: [zeromq-dev] thread_lat & thread_thr performance examples. [PATCH]

2011-03-30 Thread Robert G. Jakabosky
On Wednesday 30, Martin Sustrik wrote:
 Hi Robert,
 
  To test thread-to-thread performance I combined the local_*/remote_* from
  zeromq2/perf/* performance examples into one process using threads and a
  shared 0MQ context.
  
  These examples will default to using inproc:// transport.
  
  The attached code has the same copyright  license notice as the other
  perf. examples since most of the code was just copied over.
  
  The nice thing about these new examples is that it allows users to
  quickly see what type of latency/throughput they can get between threads
  using 0MQ.
 
 Great work!
 
 I would like to include the code to zeromq2. Would you mind posting the
 signed off version of the patch?

I was hoping that you would include it in zeromq2.  Attached is the signed-off 
patch.  This is the first signed-off patch that I have made, so I hope I did 
it right.

-- 
Robert G. Jakabosky
From 95d3395ed57330d519fcb2fa1528053c3489214a Mon Sep 17 00:00:00 2001
From: Robert G. Jakabosky bo...@sharedrealm.com
Date: Wed, 30 Mar 2011 02:27:24 -0700
Subject: [PATCH] Adding thread latency/throughput perf. examples.


Signed-off-by: Robert G. Jakabosky bo...@sharedrealm.com
---
 perf/thread_lat.cpp |  196 
 perf/thread_thr.cpp |  207 +++
 2 files changed, 403 insertions(+), 0 deletions(-)
 create mode 100644 perf/thread_lat.cpp
 create mode 100644 perf/thread_thr.cpp

diff --git a/perf/thread_lat.cpp b/perf/thread_lat.cpp
new file mode 100644
index 000..f23f3dd
--- /dev/null
+++ b/perf/thread_lat.cpp
@@ -0,0 +1,196 @@
+/*
+Copyright (c) 2007-2011 iMatix Corporation
+Copyright (c) 2007-2011 Other contributors as noted in the AUTHORS file
+
+This file is part of 0MQ.
+
+0MQ is free software; you can redistribute it and/or modify it under
+the terms of the GNU Lesser General Public License as published by
+the Free Software Foundation; either version 3 of the License, or
+(at your option) any later version.
+
+0MQ is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public License
+along with this program.  If not, see http://www.gnu.org/licenses/.
+*/
+
+#include "../include/zmq.h"
+#include "../include/zmq_utils.h"
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <pthread.h>
+
+static const char *connect_to = "inproc://lat_test";
+static const char *bind_to = "inproc://lat_test";
+static int roundtrip_count = 10;
+static size_t message_size = 1;
+
+/*
+ * local_lat - simulates the local_lat process.
+ */
+static void *
+local_lat (void *ctx)
+{
+void *s;
+int rc;
+int i;
+zmq_msg_t msg;
+
+s = zmq_socket (ctx, ZMQ_REP);
+if (!s) {
+printf ("error in zmq_socket: %s\n", zmq_strerror (errno));
+return NULL;
+}
+
+rc = zmq_connect (s, connect_to);
+if (rc != 0) {
+printf ("error in zmq_connect: %s\n", zmq_strerror (errno));
+return NULL;
+}
+
+rc = zmq_msg_init (&msg);
+if (rc != 0) {
+printf ("error in zmq_msg_init: %s\n", zmq_strerror (errno));
+return NULL;
+}
+
+for (i = 0; i != roundtrip_count; i++) {
+rc = zmq_recv (s, &msg, 0);
+if (rc != 0) {
+printf ("error in zmq_recv: %s\n", zmq_strerror (errno));
+return NULL;
+}
+if (zmq_msg_size (&msg) != message_size) {
+printf ("message of incorrect size received\n");
+return NULL;
+}
+rc = zmq_send (s, &msg, 0);
+if (rc != 0) {
+printf ("error in zmq_send: %s\n", zmq_strerror (errno));
+return NULL;
+}
+}
+
+rc = zmq_msg_close (&msg);
+if (rc != 0) {
+printf ("error in zmq_msg_close: %s\n", zmq_strerror (errno));
+return NULL;
+}
+
+zmq_sleep (1);
+
+rc = zmq_close (s);
+if (rc != 0) {
+printf ("error in zmq_close: %s\n", zmq_strerror (errno));
+return NULL;
+}
+
+return NULL;
+}
+
+/*
+ * run remote_lat code in main thread and local_lat code into child thread.
+ */
+int main (int argc, char *argv [])
+{
+pthread_t local_thread;
+void *ctx;
+void *s;
+int rc;
+int i;
+zmq_msg_t msg;
+void *watch;
+unsigned long elapsed;
+double latency;
+
+if (argc <= 1) {
+printf ("usage: thread_lat [message-size] [roundtrip-count] [bind-to] [connect-to]\n");
+}
+if(argc > 1) message_size = atoi(argv[1]);
+if(argc > 2) roundtrip_count = atoi(argv[2]);
+if(argc > 3) bind_to = argv[3];
+if(argc > 4) connect_to = argv[4];
+
+ctx = zmq_init (1);
+if (!ctx) {
+printf ("error in zmq_init: %s\n", zmq_strerror (errno));
+return -1

Re: [zeromq-dev] Important: backward incompatible changes for 0MQ/3.0!

2011-03-24 Thread Robert G. Jakabosky
On Thursday 24, Douglas Creager wrote:
  1. If tcmalloc et al are brittle, do we understand why is it so? Is it
  an inherent problem or just a lame implementation?
 
 It wasn't itself tcmalloc that was brittle, it was Mac's equivalent of
 LD_PRELOAD.
 
  2. Douglas' patch stores the pointer to allocator function in a global
  variable. That breaks the model of separate contexts (i.e. that several
  instances of 0mq can exist in a single process without misinteracting
  with each other). We have to think of something more resilient.
  
  Perhaps the allocator should be embedded into the context. You could even
  create separate contexts if you wish to have different allocators.
 
 Exactly.  That eliminates the global variable nicely.  But you've have to
 add a context parameter to the zmq_msg_t functions.
 
  3. There are at least two use cases here. AFAIU, Douglas want to
  custom-allocate the message bodies while Chunk want to custom-allocate
  everything. We should think a bit more about whether one use case
  subsumes the other or whether they are two separate use cases with two
  separate solutions.
  
  Maybe you could provide two args to the allocator when allocating memory:
  the size in bytes and a hint. This hint could be used by the allocator
  to decide upon it: use different memory pools for different parts of the
  program, etc. Then, you could use the allocator for everything in ZeroMQ
  itself and in programs using ZeroMQ, assuming ZeroMQ will have a default
  implementation of the allocator.
  
  If we go this route, it could also be the chance to use a couple of
  macros for interfacing with the allocator:
  
  ZMQ_ALLOC(size, hint);
  ZMQ_REALLOC(old_size, new_size, hint);
  ZMQ_FREE(size, hint);
  
  The point is that the macros can (maybe only in debugging mode) decorate
  the call by passing the file name and line number to the allocator. I
  have found this invaluable when debugging memory problems.
  
  Unluckily, it has been too long since the last time I did something like
  this; I would have to force myself to get back up to speed in order to
  contribute with more than simple ideas.
 
 The basic API in my previous patch would be a good starting point for this
 — it defined those macros, like you suggest.  You provide a custom
 allocator as a single function:
 
   void * (*alloc_func)(void *user_data, void *old_ptr, size_t old_size,
 size_t new_size);
 
 Depending on which parameters you pass in, this mimics malloc, realloc, and
 free.  This API is the same one they use in the Lua interpreter, and I
 implemented the same thing in the Avro C bindings, too.  It seems to work
 pretty well.

+1,  I have also used the same allocator interface for one of my private 
projects.
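
For readers who have not seen that interface, a sketch of what the default implementation
looks like (mirroring Lua's lua_Alloc contract; the names are illustrative, not a
proposed 0MQ API):

#include <stdlib.h>
#include <stddef.h>

/* One function covers malloc (old_ptr == NULL), realloc and free (new_size == 0).
   old_size lets a custom allocator track usage without its own block headers. */
static void *default_alloc (void *user_data, void *old_ptr,
                            size_t old_size, size_t new_size)
{
    (void) user_data;
    (void) old_size;
    if (new_size == 0) {
        free (old_ptr);
        return NULL;
    }
    return realloc (old_ptr, new_size);
}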

 
 The main issue with this API is that you have to keep track of the
 allocated size of everything you allocate.  For most fixed-size types,
 that's not an issue, since you can just use sizeof when you free, just
 like when you malloc.  For variable-sized buffers, though, you usually
 need an extra field to store the allocated size.

For variable-sized buffers/arrays you normally need to keep the size anyway.

 Another issue is that the zmq_msg_t API is all pure C, whereas most of the
 rest of the library is C++, so we'll have to overload the definitions of
 new and delete to really make this pervasive.
 
 I'm not sure about the hint parameter you mention — can you explain what
 you mean by that?  I had a user_data parameter in the allocation function,
 but this was passed in by the user of the library.  Your macros look like
 the hint would come from within 0mq...so would we have some well-defined
 hints that tell the allocator which part of the library the memory is for?

A hint parameter could be used to tell the allocator that the memory block 
might need to grow later, or is fixed size and will never grow.  For fixed-size 
blocks the allocator can allocate the memory from a packed region; for 
grow-able memory blocks the allocator can try to leave some space after the 
block for fast resizing.  You may also want to mark some memory as temporary 
blocks that will be freed very soon.

The hint parameter could also identify allocations for message blocks vs. 
other allocations from 0MQ.

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Important: backward incompatible changes for 0MQ/3.0!

2011-03-22 Thread Robert G. Jakabosky
On Tuesday 22, Pieter Hintjens wrote:
 On Wed, Mar 23, 2011 at 12:08 AM, MinRK benjami...@gmail.com wrote:
  I use PAIR quite a bit, because many of my small cases really are
  symmetric a=b connections (not REQ/REP pattern).  Frankly, I can
  easily use XREQ for both sides if PAIR is gone, and it works as long
  as additional connections don't happen, but if they do things will go
  wrong.  I'd rather have the error raised by PAIR than weird message
  loss that would result from XREQ.  What would be the recommended
  socket type(s) for a symmetric pair of sockets with flexible send/recv
  pattern if PAIR is removed?
 
 It's unlikely PAIR will be removed if there's proof of active use.
 
 However, you raise an interesting point. Perhaps it's possible to get
 the same results without having a distinct socket type.
 
 For example, DEALER to DEALER (xreq/xreq) with a restriction of 1
 connection per socket.

How about adding support for only accepting connections with a fixed set of 
identities? Or require all connecting identities to have a common prefix.

 Martin S. has already discussed adding this ability to limit
 connections on a socket.


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] HTTP Protocol is Supported

2011-03-19 Thread Robert G. Jakabosky
On Friday 18, Вячеслав Мурашкин wrote:
 Here are changes to support HTTP protocol while transferring messages through
 the TCP socket.
 
 https://github.com/zeromq/zeromq2-1/pull/12
 
 I've added http_socket_t class which is derived from tcp_socket_t.
 For now zmq_engine_t creates desired socket instance depending on
 options_t::proto value.
 
 Class http_socket_t simply adds HTTP header before message transfer and
 removes header from incoming data.
 
 Please feel free to ask any questions.

Both sides of a connection are sending messages as HTTP POST requests; neither 
side is responding with a valid HTTP response.  Also the HTTP headers are 
terminated with two blank lines instead of one.

Wouldn't it be easier to create a 0MQ-HTTP gateway device?

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] HTTP -> 0MQ XREQ proxy example.

2011-03-19 Thread Robert G. Jakabosky
Hi all,

I have created a simple HTTP to 0MQ proxy [1] written in 150 lines of Lua 
code.  It uses the async. HTTP server & 0MQ socket handlers from my lua-
handlers [2] project.

When an HTTP POST request is received, it allocates a unique request id and 
sends the POST data to an XREQ socket with a message envelope so that it can 
route the responses from the XREQ socket back to the correct HTTP client.  It 
would be very easy to add the HTTP request/response headers to the 0MQ 
messages by adding a JSON part to the messages.
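
The envelope idea, sketched in C with the 2.x API; the request-id layout and the helper
name are hypothetical (the real proxy is the Lua code linked below):

#include <string.h>
#include <zmq.h>

/* Send [request-id][post-body] as one multipart message on the XREQ socket.
   The id comes back as the first part of the reply, so the proxy can find
   the right HTTP client to answer. */
static int forward_post (void *xreq, const char *req_id,
                         const void *body, size_t body_len)
{
    zmq_msg_t part;

    zmq_msg_init_size (&part, strlen (req_id));
    memcpy (zmq_msg_data (&part), req_id, strlen (req_id));
    if (zmq_send (xreq, &part, ZMQ_SNDMORE) != 0)
        return -1;

    zmq_msg_init_size (&part, body_len);
    memcpy (zmq_msg_data (&part), body, body_len);
    return zmq_send (xreq, &part, 0);   /* last part: no SNDMORE */
}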

To try it out run the proxy with:
lua examples/http_zmq_proxy.lua tcp://localhost:1080/ tcp://127.0.0.1:

then start a backend XREP server on port :
lua examples/zmq_xrep_server.lua

then send a simple POST request:
curl -d "test 1" "http://localhost:1080/"

1. https://github.com/Neopallium/lua-handlers/blob/master/examples/http_zmq_proxy.lua
2. https://github.com/Neopallium/lua-handlers

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] HTTP -> 0MQ XREQ proxy example.

2011-03-19 Thread Robert G. Jakabosky
On Saturday 19, Pieter Hintjens wrote:
 Robert,
 
 This is neat. It's something lots of people have discussed making but
 no-one's actually done.
 
 Presumably this gives us security over HTTPS, and makes 0MQ accessible
 to JavaScript clients.

The HTTP server in lua-handlers supports HTTPS connections:
lua examples/http_zmq_proxy.lua 
tls://localhost:443/key=private_key.pem&cert=public_certificate.pem
 tcp://127.0.0.1:

The proxy only listens on one socket but you can make it bind to many 
sockets.

 One thing I'm wondering, if we can make a formal specification for
 what 0MQ-over-HTTP (and HTTP-over-0MQ) looks like, so that different
 implementations interoperate.

Yes, that would be good.  I would be interested in the use-cases other people 
who need this type of interface have.

 -Pieter
 
 On Sat, Mar 19, 2011 at 12:37 PM, Robert G. Jakabosky
 
 bo...@sharedrealm.com wrote:
  Hi all,
  
  I have created a simple HTTP to 0MQ proxy [1] written in 150 lines of Lua
  code.  It uses the async. HTTP server  0MQ socket handlers from my lua-
  handlers [2] project.
  
  When an HTTP POST request is received, it allocates a unique request id and
  sends the POST data to an XREQ socket with a message envelope so that it
  can route the responses from the XREQ socket back to the correct HTTP
  client.  It would be very easy to add the HTTP request/response headers
  to the 0MQ messages by adding a JSON part to the messages.
  
  To try it out run the proxy with:
  lua examples/http_zmq_proxy.lua tcp://localhost:1080/
  tcp://127.0.0.1:
  
  then start a backend XREP server on port :
  lua examples/zmq_xrep_server.lua
  
  then send a simple POST request:
  curl -d "test 1" http://localhost:1080/
  
  1. https://github.com/Neopallium/lua-handlers/blob/master/examples/http_zmq_proxy.lua
  2. https://github.com/Neopallium/lua-handlers
  
  --
  Robert G. Jakabosky


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] New faster 0MQ bindings for Lua & LuaJIT2

2011-02-26 Thread Robert G. Jakabosky
On Saturday 26, Pieter Hintjens wrote:
 Robert, 6M msg/second from Lua, up from 1.3M/sec! Very nice indeed.
 
 Feel free to update http://www.zeromq.org/bindings:lua.

done.

 I'd advise you also to put these performance results into your README,
 so it's clear to visitors to the project why they want it.

and done.

 -Pieter
 
 On Sat, Feb 26, 2011 at 9:45 AM, Robert G. Jakabosky
 
 bo...@sharedrealm.com wrote:
  There has been a lot of talk on the Lua mailing list about a new feature
  in LuaJIT2 called FFI (foreign function interface).  The new FFI feature
  greatly improves the performance of Lua code when running under LuaJIT2,
  but it doesn't work under the standard Lua VM.  So I have created a
  hybrid Lua module [1] that has normal C bindings and FFI-based bindings
  for 0MQ.
  
  Also the new bindings have support for sending/receiving messages using
  the zmq_msg_t structure.  This improves the performance even more under
  LuaJIT2 and gets the throughput to almost equal that of the C++
  benchmark.
  
  Throughput benchmark using the tcp transport over localhost:
  message size: 30 [B]
  message count: 1
  
  Original Lua bindings running under Lua 5.1.4:
  mean throughput: 1395864 [msg/s]
  mean throughput: 335.007 [Mb/s]
  
  New bindings running under Lua 5.1.4:
  mean throughput: 1577407 [msg/s]
  mean throughput: 378.578 [Mb/s]
  
  Original Lua bindings running under LuaJIT2 (git HEAD):
  mean throughput: 2516850 [msg/s]
  mean throughput: 604.044 [Mb/s]
  
  New bindings running under LuaJIT2 (git HEAD):
  mean throughput: 5112158 [msg/s]
  mean throughput: 1226.918 [Mb/s]
  
  New bindings using send_msg/recv_msg functions running under LuaJIT2 (git
  HEAD):
  mean throughput: 6160911 [msg/s]
  mean throughput: 1478.619 [Mb/s]
  
  C++ code:
  mean throughput: 6241452 [msg/s]
  mean throughput: 1497.948 [Mb/s]
  
  
  1. https://github.com/Neopallium/lua-zmq
  
  --
  Robert G. Jakabosky


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] New faster 0MQ bindings for Lua & LuaJIT2

2011-02-26 Thread Robert G. Jakabosky
On Saturday 26, Martin Sustrik wrote:
 Hi Robert,
 
  Feel free to update http://www.zeromq.org/bindings:lua.
  
  done.
 
 As far as I understand you have overwritten the original Lua binding.
 Given that the new binding doesn't work with all Lua VMs it's definitely
 not a good idea.

The new bindings do work on all Lua VMs that support the standard Lua C API.  
When loaded in a Lua VM other than LuaJIT2 (i.e. Lua 5.1.x or LuaJIT 1.2.x) 
they will fall back to using the standard Lua C API (just like the old bindings).  
There aren't even any linking issues, since the C code of the new bindings uses 
only the standard Lua C API (the FFI bindings are pure Lua code).
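
The module selection boils down to something like this (a simplified sketch of the hybrid approach; the submodule names are illustrative, not the actual lua-zmq file layout):

-- zmq.lua (sketch): pick the FFI path only when a usable FFI library exists.
local ok, ffi = pcall(require, "ffi")
if ok and ffi ~= nil then
    -- LuaJIT2: pure-Lua FFI bindings, no extra C linking involved.
    return require"zmq.ffi"
else
    -- Lua 5.1.x / LuaJIT 1.x: plain Lua C API bindings.
    return require"zmq.core"
end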

 I would suggest either keeping both bindings on the page or creating a
 separate page, say 'lua-ffi'.

I had thought about creating a new page like that, but the new bindings do not 
require FFI support, so I don't want people thinking that they only work with 
FFI.

I can create a new bindings page if you want.  I'm not sure how best to structure 
a single page that covers both the old and new bindings.

For now I will just move my bindings over to 'lua-ffi'.

 Martin


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] New faster 0MQ bindings for Lua & LuaJIT2

2011-02-26 Thread Robert G. Jakabosky
On Saturday 26, Pieter Hintjens wrote:
 On Sat, Feb 26, 2011 at 11:35 AM, Robert G. Jakabosky
 
 bo...@sharedrealm.com wrote:
  For now I will just move my bindings over to 'lua-ffi'.
 
 No, please don't do that. We have other languages such as Ruby that
 also have FFI bindings. We have languages like C# with three or more
 bindings. It would become messy to make a page for each variation.
 
 As long as there are users of a particular binding it should be
 documented. You can then work with the author of the older binding to
 merge the two and deprecate the older one, and give users consensus
 about upgrading.

Yeah, I didn't mean to step on anyone’s toes.  I have tried to keep the same 
interface in the new bindings.

The only API change that I can think of is splitting the zmq.init() function 
into two functions: init(io_threads) and init_ctx(userdata).  They could be merged 
back together; my reason for splitting them might not really be valid.  The other 
changes are additions of new functions.
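
For reference, a minimal usage sketch of the two entry points mentioned above (the argument details are my reading of the names in this mail, not documented API):

local zmq = require"zmq"

-- create a new context with one I/O thread (same as the old zmq.init(1))
local ctx = zmq.init(1)

-- wrap an existing zmq context passed in from a C host as userdata
-- (hypothetical variable; only useful when embedding Lua in a C program)
-- local ctx2 = zmq.init_ctx(existing_ctx_userdata)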

 In general people using a binding in production will not trust new
 code for a while.

Yes, that is a good reason not to switch to the new version right away.

 So I've put both bindings onto the page, please fix the titles if
 they're inaccurate.

I see that, thanks.  I will try to add some details about how the new bindings 
still work without FFI support.

 Cheers
 Pieter


-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] non-blocking zmq sockets with lua-ev & lua-zmq

2010-12-24 Thread Robert G. Jakabosky
Hi everyone,

I would like to let you know about a project [1] I just created that makes it 
easy to use zmq sockets in non-blocking mode from Lua scripts.  Take a look 
at the test_*.lua scripts to see examples of using each type of zmq socket.

It is built on top of lua-ev & lua-zmq (requires ZeroMQ 2.1.0).

1. https://github.com/Neopallium/lua-handlers
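
The core pattern is to register the socket's notification FD with the libev loop and drain the socket whenever it becomes readable.  A rough sketch of that idea (the getopt/recv method names, constants and the endpoint here are my assumptions; see the test_*.lua scripts for the real API usage):

local ev  = require"ev"
local zmq = require"zmq"

local ctx  = zmq.init(1)
local sock = ctx:socket(zmq.SUB)
sock:setopt(zmq.SUBSCRIBE, "")
sock:connect("tcp://127.0.0.1:5555")   -- illustrative endpoint

-- zmq's notification FD is edge-triggered, so drain every pending message
-- each time the watcher fires before going back to the event loop.
local function drain()
    while true do
        local msg = sock:recv(zmq.NOBLOCK)
        if not msg then break end
        print("got:", msg)
    end
end

local loop = ev.Loop.default
ev.IO.new(function() drain() end, sock:getopt(zmq.FD), ev.READ):start(loop)
loop:loop()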

-- 
Robert G. Jakabosky
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev