Re: [zeromq-dev] HWM behaviour blocking

2012-07-02 Thread Justin Karneges
On Friday, June 29, 2012 06:13:53 AM Paul Colomiets wrote:
 On Thu, Jun 28, 2012 at 9:06 PM, Justin Karneges jus...@affinix.com wrote:
  It's really just for functional completeness of my event-driven wrapper.
  The only time I can see this coming up in practice is an application
  that pushes a message just before exiting.
  
  For now, I set ZMQ_LINGER to 0 when a socket object is destroyed, making
  the above application impossible to create. What I'm thinking of doing
  now is offering an alternate, blocking-based shutdown method. This would
  violate the spirit of my wrapper, but may work well enough for apps that
  finish with a single socket doing a write-and-exit.
 
 I think you should just set linger and use it. zmq_close() doesn't
 block; it's zmq_term() that blocks.

Wow, silly me working around a non-problem. I was assuming zmq_close() 
blocked. Thanks for clarifying.

Justin


Re: [zeromq-dev] HWM behaviour blocking

2012-06-29 Thread Paul Colomiets
Hi Justin,

On Thu, Jun 28, 2012 at 9:06 PM, Justin Karneges jus...@affinix.com wrote:
 It's really just for functional completeness of my event-driven wrapper. The
 only time I can see this coming up in practice is an application that pushes a
 message just before exiting.

 For now, I set ZMQ_LINGER to 0 when a socket object is destroyed, making the
 above application impossible to create. What I'm thinking of doing now is
 offering an alternate, blocking-based shutdown method. This would violate the
 spirit of my wrapper, but may work well enough for apps that finish with a
 single socket doing a write-and-exit.


I think you should just set linger and use it. zmq_close() doesn't
block; it's zmq_term() that blocks. And usually starting an application
has much more overhead than sending a message, so for an application
that starts, does a request (send), and shuts down, this delay is
probably negligible (unless your data is very large and/or the network
is overloaded).
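As a minimal sketch of this write-and-exit pattern, assuming libzmq 3.x and a hypothetical tcp://localhost:5555 endpoint: zmq_close() returns immediately, and it is zmq_term() that waits, up to ZMQ_LINGER, for pending messages to be flushed.

    /* Minimal write-and-exit sketch; assumes libzmq 3.x and a hypothetical
       endpoint. zmq_close() does not block; zmq_term() does, up to LINGER. */
    #include <zmq.h>

    int main (void)
    {
        void *ctx  = zmq_init (1);
        void *push = zmq_socket (ctx, ZMQ_PUSH);

        int linger = 2000;   /* wait at most 2 s at termination; default -1 waits forever */
        zmq_setsockopt (push, ZMQ_LINGER, &linger, sizeof linger);

        zmq_connect (push, "tcp://localhost:5555");
        zmq_send (push, "goodbye", 7, 0);

        zmq_close (push);    /* returns immediately */
        zmq_term (ctx);      /* blocks until the message is flushed, or LINGER expires */
        return 0;
    }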

-- 
Paul


Re: [zeromq-dev] HWM behaviour blocking

2012-06-28 Thread Paul Colomiets
Hi Justin,

On Thu, Jun 28, 2012 at 8:50 AM, Justin Karneges jus...@affinix.com wrote:
 On Thursday, May 10, 2012 01:53:48 PM Pieter Hintjens wrote:
 On Thu, May 10, 2012 at 3:44 PM, Paul Colomiets p...@colomiets.name wrote:
  Can you be more specific, why setting HWM to 1 is a bad thing? Do you
  mean, that it smells bad to set HWM to 1 for reliability? Or do you
  think that setting it will have other consequences? (low performance?)

 it's bad because you're trying to force a synchronous model on an
 asynchronous system, and doing it at the wrong level. If you really
 want synchronization you MUST get some upstream data from the
 receiver. Just throttling the sender cannot work reliably.

 I'm about to set HWM to 1 and I recalled a thread about this so I've looked it
 up. Totally agree about what's been said so far. The reason I want to do this
 is because I need a way for an event-driven application to determine if data
 has been written to the underlying kernel. This is useful in case the
 application wants to close the socket immediately after writing data. In a
 traditional blocking application, this is easy: just call zmq_close() and
 it'll unblock when done. However, in an event-driven application, the only way
 I can think of imitating this functionality is by setting HWM to 1 and waiting
 until ZMQ_EVENTS indicates writability, then calling zmq_close().


Why do you need zmq_close() in an asynchronous application in the first
place? Is your application very connection-hungry? We never close
ZeroMQ sockets, even on fairly low-traffic connections, and it works
fine.

-- 
Paul


Re: [zeromq-dev] HWM behaviour blocking

2012-06-28 Thread Justin Karneges
On Thursday, June 28, 2012 03:50:57 AM Paul Colomiets wrote:
 Hi Justin,
 
 On Thu, Jun 28, 2012 at 8:50 AM, Justin Karneges jus...@affinix.com wrote:
  On Thursday, May 10, 2012 01:53:48 PM Pieter Hintjens wrote:
  On Thu, May 10, 2012 at 3:44 PM, Paul Colomiets p...@colomiets.name 
wrote:
   Can you be more specific, why setting HWM to 1 is a bad thing? Do you
   mean, that it smells bad to set HWM to 1 for reliability? Or do you
   think that setting it will have other consequences? (low performance?)
  
  it's bad because you're trying to force a synchronous model on an
  asynchronous system, and doing it at the wrong level. If you really
  want synchronization you MUST get some upstream data from the
  receiver. Just throttling the sender cannot work reliably.
  
  I'm about to set HWM to 1 and I recalled a thread about this so I've
  looked it up. Totally agree about what's been said so far. The reason I
  want to do this is because I need a way for an event-driven application
  to determine if data has been written to the underlying kernel. This is
  useful in case the application wants to close the socket immediately
  after writing data. In a traditional blocking application, this is easy:
  just call zmq_close() and it'll unblock when done. However, in an
  event-driven application, the only way I can think of imitating this
  functionality is by setting HWM to 1 and waiting until ZMQ_EVENTS
  indicates writability, then calling zmq_close().
 
 Why do you need zmq_close() in an asynchronous application in the first
 place? Is your application very connection-hungry? We never close
 ZeroMQ sockets, even on fairly low-traffic connections, and it works
 fine.

It's really just for functional completeness of my event-driven wrapper. The 
only time I can see this coming up in practice is an application that pushes a 
message just before exiting.

For now, I set ZMQ_LINGER to 0 when a socket object is destroyed, making the 
above application impossible to create. What I'm thinking of doing now is 
offering an alternate, blocking-based shutdown method. This would violate the 
spirit of my wrapper, but may work well enough for apps that finish with a 
single socket doing a write-and-exit.

Justin


Re: [zeromq-dev] HWM behaviour blocking

2012-06-27 Thread Justin Karneges
On Thursday, May 10, 2012 01:53:48 PM Pieter Hintjens wrote:
 On Thu, May 10, 2012 at 3:44 PM, Paul Colomiets p...@colomiets.name wrote:
  Can you be more specific, why setting HWM to 1 is a bad thing? Do you
  mean, that it smells bad to set HWM to 1 for reliability? Or do you
  think that setting it will have other consequences? (low performance?)
 
 it's bad because you're trying to force a synchronous model on an
 asynchronous system, and doing it at the wrong level. If you really
 want synchronization you MUST get some upstream data from the
 receiver. Just throttling the sender cannot work reliably.

I'm about to set HWM to 1, and I recalled a thread about this, so I've looked 
it up. I totally agree with what's been said so far. The reason I want to do 
this is that I need a way for an event-driven application to determine whether 
data has been written to the underlying kernel. This is useful when the 
application wants to close the socket immediately after writing data. In a 
traditional blocking application this is easy: just call zmq_close() and it 
will unblock when done. However, in an event-driven application, the only way 
I can think of to imitate this functionality is by setting HWM to 1, waiting 
until ZMQ_EVENTS indicates writability, and then calling zmq_close().
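For concreteness, here is a rough sketch of that approach (the thread goes on to argue this is the wrong level at which to solve the problem). It assumes libzmq 3.x, where the option is ZMQ_SNDHWM (plain ZMQ_HWM on 2.x), and a hypothetical endpoint; an event-driven wrapper would watch ZMQ_FD and re-check ZMQ_EVENTS rather than calling zmq_poll() directly.

    /* Sketch only: set the send HWM to 1, send, then wait until the socket
       is writable again (the single HWM slot has been freed) before closing.
       Assumes libzmq 3.x and a hypothetical endpoint. */
    #include <zmq.h>

    int main (void)
    {
        void *ctx  = zmq_init (1);
        void *push = zmq_socket (ctx, ZMQ_PUSH);

        int hwm = 1;
        zmq_setsockopt (push, ZMQ_SNDHWM, &hwm, sizeof hwm);
        zmq_connect (push, "tcp://localhost:5555");

        zmq_send (push, "payload", 7, 0);

        /* Writability returns once the queued message has been handed off;
           in an event loop this would be driven by ZMQ_FD / ZMQ_EVENTS. */
        zmq_pollitem_t item = { push, 0, ZMQ_POLLOUT, 0 };
        zmq_poll (&item, 1, -1);

        zmq_close (push);
        zmq_term (ctx);
        return 0;
    }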

Justin


Re: [zeromq-dev] HWM behaviour blocking

2012-05-15 Thread Viet Hoang (Quant Edge)
Pieter,

I am using mdclient2 (DEALER/ROUTER) and it is excellent: I don't have to 
manage IPs, sessions, and so on. However, without an ACK from the worker, the 
sender wouldn't know whether a message was received or lost in transit. I want 
to deploy this pattern over the internet, so delivery confirmation is 
important.

My implementation requires the worker to ACK each message it receives. The 
sender waits for the ACK before sending the next message. I want to improve 
performance by sending a batch of messages and then checking the ACKs; if any 
ACK is missing, the missing messages are resent.
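As a very rough illustration of the per-message ACK loop (not MDP framing; the socket, endpoint, and helper name are hypothetical, assuming libzmq 3.x):

    /* Send one message on a DEALER socket, then wait (with a timeout) for
       the worker's ACK before the caller sends the next one. Returns 1 if
       ACKed, 0 on timeout so the caller can resend. Hypothetical helper. */
    #include <zmq.h>
    #include <string.h>

    int send_and_wait_ack (void *dealer, const char *msg)
    {
        char ack[16];
        int timeout = 1000;                              /* 1 s ACK timeout */
        zmq_setsockopt (dealer, ZMQ_RCVTIMEO, &timeout, sizeof timeout);

        zmq_send (dealer, msg, strlen (msg), 0);
        int rc = zmq_recv (dealer, ack, sizeof ack, 0);  /* -1 + EAGAIN on timeout */
        return rc >= 0;
    }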

Cheers,

Viet




On May 14, 2012, at 11:11 PM, Pieter Hintjens wrote:

 Viet,
 
 There is actually a variant of the MDP client that works
 asynchronously. You still want workers to be synchronous, but clients
 can stream multiple requests and get replies. See the mdpclient2
 example.
 
 -Pieter
 
 On Mon, May 14, 2012 at 10:36 AM, Viet Hoang (Quant Edge)
 viet.ho...@quant-edge.com wrote:
 The Majordomo pattern may suit this well. The receiver ACKs each
 message; if you don't receive an ACK, just resend. However, you sacrifice the
 beauty of ZeroMQ - SPEED. We applied Majordomo to our demo platform and so
 far so good (better than our old C# raw socket implementation). What we will
 do is have async ACKs to improve performance.
 
 
 On May 11, 2012, at 6:44 AM, Michel Pelletier wrote:
 
 On Thu, May 10, 2012 at 1:53 PM, Pieter Hintjens p...@imatix.com wrote:
 
 On Thu, May 10, 2012 at 3:44 PM, Paul Colomiets p...@colomiets.name wrote:
 
 
 Can you be more specific, why setting HWM to 1 is a bad thing? Do you
 
 mean, that it smells bad to set HWM to 1 for reliability? Or do you
 
 think that setting it will have other consequences? (low performance?)
 
 
 it's bad because you're trying to force a synchronous model on an
 
 asynchronous system, and doing it at the wrong level. If you really
 
 want synchronization you MUST get some upstream data from the
 
 receiver. Just throttling the sender cannot work reliably.
 
 
 Agreed.  Here's my take on what trips a lot of people up with 0mq:  we
 are used to controlling how and when something is sent at the point
 that we call send(), or at least knowing in advance what will happen
 if we try, but in an async model you have to let that go.  send() is
 going to return immediately (if you haven't hit a blocking case) and
 your message is now on its own, free as a bird, to live in various
 queues and buffers before it ends up at its destination.  You have no
 control or visibility of its fate after you send it unless your
 receiver acknowledges it, or acknowledges it didn't receive it after a
 period of time (nack).
 
 The blocking case isn't really an exception, you sent when your
 application wasn't ready to receive, either because your buffers were
 full or your receivers weren't ready.  Senders and receivers should
 synchronize this application level state with each other, possibly via
 some out-of-band channel, either by indicating they are ready, or
 connected, or that they are busy and can't do anymore, or by
 exchanging some kind of flow control information so that the sender
 doesn't fill the buffers because the receiver can't keep up.
 
 To use an analogy, all 0mq provides are the pipes.  The pipes can't
 tell you that the tap is running and the sink is overflowing or the
 drain is clogged.  If you want to have a reservoir at some point to
 regulate flow, an inline device can store a certain capacity of
 messages.  If your pipe is delivering to a downstream reservoir which
 is near full capacity, someone at the downstream end needs to pick up
 the phone (yet another pipe of sorts) and tell the upstream to turn
 down the flow.  If that doesn't happen, the reservoir is full, and the
 flow stops (blocked) or maybe spills over (discards) depending on the
 design of the *application*, not the pipe.  In either case it's not
 the pipe's fault, it did its job, it can't solve your design problems
 any more than a pipe can become a city water system all by itself.
 Adding all these application level flow semantics to the pipe is not
 right, and would horribly complicate the library.

Re: [zeromq-dev] HWM behaviour blocking

2012-05-15 Thread Pieter Hintjens
On Tue, May 15, 2012 at 2:20 AM, Viet Hoang (Quant Edge)
viet.ho...@quant-edge.com wrote:

 My implementation requires the worker to ACK each message it receives. The
 sender waits for the ACK before sending the next message. I want to
 improve performance by sending a batch of messages and then checking the
 ACKs; if any ACK is missing, the missing messages are resent.

If you want to send a batch to a single worker, then a batch is just a
larger message. I.e. you can do this fully at the application level.
If you want to break a batch over multiple workers, then you would
have to modify the MDP protocol.
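A minimal sketch of that idea, treating a batch as one multipart message (assuming libzmq 3.x; the socket, endpoint, and helper name are hypothetical, and this is not MDP framing):

    /* Pack a batch of items into a single multipart message using
       ZMQ_SNDMORE; the receiver reads frames until ZMQ_RCVMORE is 0 and
       can ACK the whole batch once. Hypothetical helper, libzmq 3.x. */
    #include <zmq.h>
    #include <string.h>

    void send_batch (void *sock, const char **items, int count)
    {
        int i;
        for (i = 0; i < count; i++) {
            int flags = (i < count - 1) ? ZMQ_SNDMORE : 0;
            zmq_send (sock, items[i], strlen (items[i]), flags);
        }
    }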

-Pieter


Re: [zeromq-dev] HWM behaviour blocking

2012-05-14 Thread Viet Hoang (Quant Edge)
The Majordomo pattern may suit this well. The receiver ACKs each message; if 
you don't receive an ACK, just resend. However, you sacrifice the beauty of 
ZeroMQ - SPEED. We applied Majordomo to our demo platform and so far so good 
(better than our old C# raw socket implementation). What we will do is have 
async ACKs to improve performance.


On May 11, 2012, at 6:44 AM, Michel Pelletier wrote:

 On Thu, May 10, 2012 at 1:53 PM, Pieter Hintjens p...@imatix.com wrote:
 On Thu, May 10, 2012 at 3:44 PM, Paul Colomiets p...@colomiets.name wrote:
 
 Can you be more specific, why setting HWM to 1 is a bad thing? Do you
 mean, that it smells bad to set HWM to 1 for reliability? Or do you
 think that setting it will have other consequences? (low performance?)
 
 it's bad because you're trying to force a synchronous model on an
 asynchronous system, and doing it at the wrong level. If you really
 want synchronization you MUST get some upstream data from the
 receiver. Just throttling the sender cannot work reliably.
 
 Agreed.  Here's my take on what trips a lot of people up with 0mq:  we
 are used to controlling how and when something is sent at the point
 that we call send(), or at least knowing in advance what will happen
 if we try, but in an async model you have to let that go.  send() is
 going to return immediately (if you haven't hit a blocking case) and
 your message is now on its own, free as a bird, to live in various
 queues and buffers before it ends up at its destination.  You have no
 control or visibility of its fate after you send it unless your
 receiver acknowledges it, or acknowledges it didn't receive it after a
 period of time (nack).
 
 The blocking case isn't really an exception, you sent when your
 application wasn't ready to receive, either because your buffers were
 full or your receivers weren't ready.  Senders and receivers should
 synchronize this application level state with each other, possibly via
 some out-of-band channel, either by indicating they are ready, or
 connected, or that they are busy and can't do anymore, or by
 exchanging some kind of flow control information so that the sender
 doesn't fill the buffers because the receiver can't keep up.
 
 To use an analogy, all 0mq provides are the pipes.  The pipes can't
 tell you that the tap is running and the sink is overflowing or the
 drain is clogged.  If you want to have a reservoir at some point to
 regulate flow, an inline device can store a certain capacity of
 messages.  If your pipe is delivering to a downstream reservoir which
 is near full capacity, someone at the downstream end needs to pick up
 the phone (yet another pipe of sorts) and tell the upstream to turn
 down the flow.  If that doesn't happen, the reservoir is full, and the
 flow stops (blocked) or maybe spills over (discards) depending on the
 design of the *application*, not the pipe.  In either case it's not
 the pipe's fault, it did its job, it can't solve your design problems
 any more than a pipe can become a city water system all by itself.
 Adding all these application level flow semantics to the pipe is not
 right, and would horribly complicate the library.
 
 -Michel


Re: [zeromq-dev] HWM behaviour blocking

2012-05-14 Thread Pieter Hintjens
Viet,

There is actually a variant of the MDP client that works
asynchronously. You still want workers to be synchronous, but clients
can stream multiple requests and get replies. See the mdpclient2
example.

-Pieter

On Mon, May 14, 2012 at 10:36 AM, Viet Hoang (Quant Edge)
viet.ho...@quant-edge.com wrote:
 The Majordomo pattern may suit this well. The receiver ACKs each
 message; if you don't receive an ACK, just resend. However, you sacrifice the
 beauty of ZeroMQ - SPEED. We applied Majordomo to our demo platform and so
 far so good (better than our old C# raw socket implementation). What we will
 do is have async ACKs to improve performance.


 On May 11, 2012, at 6:44 AM, Michel Pelletier wrote:

 On Thu, May 10, 2012 at 1:53 PM, Pieter Hintjens p...@imatix.com wrote:

 On Thu, May 10, 2012 at 3:44 PM, Paul Colomiets p...@colomiets.name wrote:


 Can you be more specific, why setting HWM to 1 is a bad thing? Do you

 mean, that it smells bad to set HWM to 1 for reliability? Or do you

 think that setting it will have other consequences? (low performance?)


 it's bad because you're trying to force a synchronous model on an

 asynchronous system, and doing it at the wrong level. If you really

 want synchronization you MUST get some upstream data from the

 receiver. Just throttling the sender cannot work reliably.


 Agreed.  Here's my take on what trips a lot of people up with 0mq:  we
 are used to controlling how and when something is sent at the point
 that we call send(), or at least knowing in advance what will happen
 if we try, but in an async model you have to let that go.  send() is
 going to return immediately (if you haven't hit a blocking case) and
 your message is now on its own, free as a bird, to live in various
 queues and buffers before it ends up at its destination.  You have no
 control or visibility of its fate after you send it unless your
 receiver acknowledges it, or acknowledges it didn't receive it after a
 period of time (nack).

 The blocking case isn't really an exception, you sent when your
 application wasn't ready to receive, either because your buffers were
 full or your receivers weren't ready.  Senders and receivers should
 synchronize this application level state with each other, possibly via
 some out-of-band channel, either by indicating they are ready, or
 connected, or that they are busy and can't do anymore, or by
 exchanging some kind of flow control information so that the sender
 doesn't fill the buffers because the receiver can't keep up.

 To use an analogy, all 0mq provides are the pipes.  The pipes can't
 tell you that the tap is running and the sink is overflowing or the
 drain is clogged.  If you want to have a reservoir at some point to
 regulate flow, an inline device can store a certain capacity of
 messages.  If your pipe is delivering to a downstream reservoir which
 is near full capacity, someone at the downstream end needs to pick up
 the phone (yet another pipe of sorts) and tell the upstream to turn
 down the flow.  If that doesn't happen, the reservoir is full, and the
 flow stops (blocked) or maybe spills over (discards) depending on the
 design of the *application*, not the pipe.  In either case it's not
 the pipe's fault, it did its job, it can't solve your design problems
 any more than a pipe can become a city water system all by itself.
 Adding all these application level flow semantics to the pipe is not
 right, and would horribly complicate the library.

 -Michel


Re: [zeromq-dev] HWM behaviour blocking

2012-05-10 Thread Pieter Hintjens
On Thu, May 10, 2012 at 7:42 AM, Steffen Mueller
zer...@steffen-mueller.net wrote:

 Frankly, I fail to see what's so funky about not wanting to block the sender
 if the receiver is restarted, using the queuing of the library[1].

 Either way, I understand the message (and Chuck's, too). I'll roll my own
 solution.

You seem annoyed that ZeroMQ somehow does not live up to your
expectations. Yet the patterns which ZeroMQ enables are well
documented, and the material that explains how to build on top is
vast, and translated into dozens of programming languages.

I've no strong opinion on this, but you might reflect on what it looks
like to others. You find a free library, made by others over years at
their expense. You do not read the available material (or you skim
it). You ask for, and get expert advice, for free. Then you complain
that the tool doesn't fit your personal use case as though the
universe was designed for you?

Steffen, seriously? Learn it, use it, and if you can improve it, send
us a patch.

-Pieter


Re: [zeromq-dev] HWM behaviour blocking

2012-05-10 Thread Steffen Mueller
Hi Pieter,

On 05/10/2012 02:36 PM, Pieter Hintjens wrote:
 On Thu, May 10, 2012 at 7:42 AM, Steffen Mueller
 zer...@steffen-mueller.net  wrote:

 Frankly, I fail to see what's so funky about not wanting to block the sender
 if the receiver is restarted, using the queuing of the library[1].

[snip, explanation of why I was looking for the functionality]

 Either way, I understand the message (and Chuck's, too). I'll roll my own
 solution.

 You seem annoyed that ZeroMQ somehow does not live up to your
 expectations. Yet the patterns which ZeroMQ enables are well
 documented, and the material that explains how to build on top is
 vast, and translated into dozens of programming languages.

Oh, I wasn't annoyed at the library. If my tone was sour, I apologize!

At the same time, I think I wasn't clear enough about what I meant by "the 
message" and "roll my own". I meant that the message had been "0MQ is not 
meant to do that and won't", and that the conclusion was that I need to roll 
my own on top of 0MQ, not that I intended to ditch 0MQ and start from TCP.

 I've no strong opinion on this, but you might reflect on what it looks
 like to others. You find a free library, made by others over years at
 their expense. You do not read the available material (or you skim
 it). You ask for, and get expert advice, for free. Then you complain
 that the tool doesn't fit your personal use case as though the
 universe was designed for you?

 Steffen, seriously? Learn it, use it, and if you can improve it, send
 us a patch.

Don't be condescending. I've written and supported a lot of free 
software myself. That's not to say anything entitles me to act like 
a dick. Again, if that was the impression I created, I'm really sorry. 
This logic goes both ways.

--Steffen


Re: [zeromq-dev] HWM behaviour blocking

2012-05-10 Thread Pieter Hintjens
On Thu, May 10, 2012 at 11:14 AM, Steffen Mueller
zer...@steffen-mueller.net wrote:

 Oh, I wasn't annoyed at the library. If my tone was sour, I apologize!

Yes, it seemed like that. No harm done. My apologies for jumping to assumptions.

The functionality you want is (IMO, because it takes time to fully dive
into people's use cases, and time is precious) high level and depends
on two-way communication between peers, heartbeating, and some form of
flow control (credit-based or other). It is much, much more than the
core library can attempt to do while still aiming to be a generic
high-performance transport layer. HWMs are no more useful for this than
the buffers in your router.

This is most definitely "exotic routing", and it should be relatively
easy to roll your own, but do first study the routing and reliability
examples in the Guide in detail, because most likely, as you do, your
view of the problem will change quite profoundly.

-Pieter


Re: [zeromq-dev] HWM behaviour blocking

2012-05-10 Thread Michel Pelletier
On Thu, May 10, 2012 at 9:56 AM, Pieter Hintjens p...@imatix.com wrote:
 On Thu, May 10, 2012 at 11:14 AM, Steffen Mueller
 zer...@steffen-mueller.net wrote:


 This is most definitely "exotic routing", and it should be relatively
 easy to roll your own, but do first study the routing and reliability
 examples in the Guide in detail, because most likely, as you do, your
 view of the problem will change quite profoundly.

This viewpoint change is something I think a lot of people who use 0mq
come to have.  At almost every step in learning to use this library I
have approached a problem with a solution in mind, and come away with
something different and better.  My personal problem is that, like most
developers, I have a huge ego, and when confronted with a distributed
engineering problem I say to myself, "Oh, I got this, I'm an expert,
this is trivial, all I need is X, Y, and Z," but then I come to realize
that I don't know how to solve the problem, I'm not an expert in
distributed computing (few of us are), I "don't got it", and that's
when I go read the guide again.

I have had a hard time myself convincing some of my local colleagues
that 0mq is the exactly perfect tool for the job it does, not
necessarily the job they have in mind.  They want a perfect world
where an asynchronous system satisfies their synchronous desires.
They want to control, and be able to know, the state and location of
every message and connected peer at any one time.  As if that were
possible!  These are the types that tend to set HWM to 1.  It's hard
to convince them that this is the wrong approach and that the right
approach is to have peers communicate their states and message acks to
each other and that you have to deal with the cases where things fall
down.

It's a tough social and documentation problem and I'm not sure how to
go about fixing it.  But for now, I'm happy to have 0mq as my secret
weapon.

-Michel


Re: [zeromq-dev] HWM behaviour blocking

2012-05-10 Thread Pieter Hintjens
On Thu, May 10, 2012 at 12:23 PM, Michel Pelletier
pelletier.mic...@gmail.com wrote:

 It's a tough social and documentation problem and I'm not sure how to
 go about fixing it.  But for now, I'm happy to have 0mq as my secret
 weapon.

I think simply expressing the problem is a good start to solving it.

This hits us often enough that it's worth stating explicitly somewhere
up-front. So I've added a paragraph on the "Solve a Problem" page at
http://www.zeromq.org/intro:ask-for-help.

See if that helps...

-Pieter


Re: [zeromq-dev] HWM behaviour blocking

2012-05-10 Thread Pieter Hintjens
On Thu, May 10, 2012 at 3:44 PM, Paul Colomiets p...@colomiets.name wrote:

 Can you be more specific, why setting HWM to 1 is a bad thing? Do you
 mean, that it smells bad to set HWM to 1 for reliability? Or do you
 think that setting it will have other consequences? (low performance?)

it's bad because you're trying to force a synchronous model on an
asynchronous system, and doing it at the wrong level. If you really
want synchronization you MUST get some upstream data from the
receiver. Just throttling the sender cannot work reliably.

-Pieter


Re: [zeromq-dev] HWM behaviour blocking

2012-05-09 Thread Chuck Remes
On May 9, 2012, at 12:45 AM, Steffen Mueller wrote:

 Dear all,
 
 I have an application of 0MQ where I have one or multiple server 
 instances that use a PUSH socket to send messages that must be processed 
 by any one of potentially many workers. The servers are single-threaded 
 apart from 0MQ's IO thread and this is hard to change.
 
 The documentation for the PUSH socket type (and others like XREQ) 
 explains that sending messages on such a socket will be a blocking 
 operation IF
 
  - the HWM is hit
  - OR there are no peers
 
 In my scenarios, I want to be resilient against intermittent client 
 failure (due to whatever -- coordinated restart, failure, ...). But during 
 that time, the server processes will block on the write and that is not 
 acceptable. Is it reasonable to hope for a way to achieve the following 
 behaviour?
 
 Sending a message down a socket of this type will block if:
 
  - the HWM is hit for all peers (as per docs, presumably, this means 
 all and each separately since the buffers are probably per-client)
  - OR a single global HWM is hit if there are no peers
 
 IOW, I'd like to be able to shove up to $HWM messages down a pipe no 
 matter what, and have the PUSH/whatever socket use a single $HWM-depth 
 buffer if there is no peer. As soon as a peer connects, it could do one 
 of two things:
 
  - simply swap that buffer in to become the first connecting peer's 
 send queue (which might be undesirable in some cases since it doesn't 
 load balance but it's likely much easier to implement and more efficient)
  - use that buffer as another queue stage to load balance from
 
 Any chance I could have such a functionality? Of course, being able to 
 determine whether there are any peers connected would be great, too.

I recommend that you do non-blocking writes and check the return code. If 
zmq_errno is equal to EAGAIN, then you know that you have either hit HWM or 
that there are no peers.
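A minimal sketch of that check, assuming libzmq 3.x (where the non-blocking flag is ZMQ_DONTWAIT; on 2.x it is ZMQ_NOBLOCK) and a hypothetical, already-connected socket:

    /* Non-blocking send: on EAGAIN we know only that either the HWM was hit
       or there are no connected peers; the library does not say which.
       Hypothetical helper, assuming libzmq 3.x. */
    #include <zmq.h>
    #include <errno.h>

    int try_send (void *sock, const void *buf, size_t len)
    {
        int rc = zmq_send (sock, buf, len, ZMQ_DONTWAIT);
        if (rc == -1 && zmq_errno () == EAGAIN)
            return 0;      /* would block: HWM reached or no peers; retry later */
        return rc >= 0;    /* 1 on success, 0 on any other error */
    }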

cr



Re: [zeromq-dev] HWM behaviour blocking

2012-05-09 Thread Steffen Mueller
Hi Chuck,

On 05/09/2012 03:57 PM, Chuck Remes wrote:
 On May 9, 2012, at 12:45 AM, Steffen Mueller wrote:
 I have an application of 0MQ where I have one or multiple server
 instances that use a PUSH socket to send messages that must be
 processed by any one of potentially many workers. The servers are
 single-threaded apart from 0MQ's IO thread and this is hard to
 change.

 The documentation for the PUSH socket type (and others like XREQ)
 explain that sending messages on such a socket will be a blocking
 operation IF

 - the HWM is hit - OR there are no peers

 In my scenarios, I want to be resilient against intermittent
 client failure (due to whatever -- coordinated restart, failure,
 ...). But for this time, the server processes will block on the
 write and that is not acceptable. Is it reasonable to hope for a
 way to achieve the following behaviour?

 Sending a message down a socket of this type will block if:

 - the HWM is hit for all peers (as per docs, presumably, this
 means all and each separately since the buffers are probably
 per-client) - OR a single global HWM is hit if there are no peers

 IOW, I'd like to be able to shove up to $HWM messages down a pipe
 no matter what, and have the PUSH/whatever socket use a single
 $HWM-depth buffer if there is no peer. As soon as a peer connects,
 it could do one of two things:

 - simply swap that buffer in to become the first connecting peer's
 send queue (which might be undesirable in some cases since it
 doesn't load balance but it's likely much easier to implement and
 more efficient) - use that buffer as another queue stage to load
 balance from

 Any chance I could have such a functionality? Of course, being able
 to determine whether there are any peers connected would be great,
 too.

 I recommend that you do non-blocking writes and check the return
 code. If zmq_errno is equal to EAGAIN, then you know that you have
 either hit HWM or that there are no peers.

thanks for your advice.

Alas, I can't see how that would help me with the particular issue I 
have. In a nutshell, I want the HWM to apply even while there are no 
listeners. Right now, I'd have to do non-blocking writes and implement 
my own buffering. In particular, when you include multi-frame messages 
in the picture, that's just a lot of silly effort. Thus, I am asking 
whether there could be a mode of operation or socket type in 0MQ where 
0MQ does the buffering and applies the HWM even without currently 
connected peers.

Furthermore, "there are no peers or the HWM is hit" doesn't help me AT 
ALL in this scenario, since I am attempting to *distinguish* between the 
two scenarios. So even if I were to implement my own message queuing in 
the application (yikes, really?), I would essentially be doing double 
buffering if the HWM is hit, since it's not distinguishable from having 
no peers. IOW, what I want is currently NOT POSSIBLE AT ALL with 0MQ 
unless I'm missing something.

--Steffen


Re: [zeromq-dev] HWM behaviour blocking

2012-05-09 Thread Chuck Remes

On May 9, 2012, at 11:29 AM, Steffen Mueller wrote:

 Hi Chuck,
 
 On 05/09/2012 03:57 PM, Chuck Remes wrote:
 On May 9, 2012, at 12:45 AM, Steffen Mueller wrote:
 I have an application of 0MQ where I have one or multiple server
 instances that use a PUSH socket to send messages that must be
 processed by any one of potentially many workers. The servers are
 single-threaded apart from 0MQ's IO thread and this is hard to
 change.
 
 The documentation for the PUSH socket type (and others like XREQ)
 explain that sending messages on such a socket will be a blocking
 operation IF
 
 - the HWM is hit - OR there are no peers
 
 In my scenarios, I want to be resilient against intermittent
 client failure (due to whatever -- coordinated restart, failure,
 ...). But for this time, the server processes will block on the
 write and that is not acceptable. Is it reasonable to hope for a
 way to achieve the following behaviour?
 
 Sending a message down a socket of this type will block if:
 
 - the HWM is hit for all peers (as per docs, presumably, this
 means all and each separately since the buffers are probably
 per-client) - OR a single global HWM is hit if there are no peers
 
 IOW, I'd like to be able to shove up to $HWM messages down a pipe
 no matter what, and have the PUSH/whatever socket use a single
 $HWM-depth buffer if there is no peer. As soon as a peer connects,
 it could do one of two things:
 
 - simply swap that buffer in to become the first connecting peer's
 send queue (which might be undesirable in some cases since it
 doesn't load balance but it's likely much easier to implement and
 more efficient) - use that buffer as another queue stage to load
 balance from
 
 Any chance I could have such a functionality? Of course, being able
 to determine whether there are any peers connected would be great,
 too.
 
 I recommend that you do non-blocking writes and check the return
 code. If zmq_errno is equal to EAGAIN, then you know that you have
 either hit HWM or that there are no peers.
 
 thanks for your advice.
 
 Alas, I can't see how that would help me with the particular issue I have. In 
 a nutshell, I want the HWM to apply even while there are no listeners. 
 Right now, I'd have to do non-blocking writes and implement my own buffering. 
 In particular, when you include multi-frame messages in the picture, that's 
 just a lot of silly effort. Thus, I am asking whether there could be a mode of 
 operation or socket type in 0MQ where 0MQ does the buffering and applies the 
 HWM even without currently connected peers.

This isn't supported in 0mq. It's such an odd use-case that I doubt it ever 
would be.

 Furthermore, "there are no peers or the HWM is hit" doesn't help me AT ALL in 
 this scenario, since I am attempting to *distinguish* between the two 
 scenarios. So even if I were to implement my own message queuing in the 
 application (yikes, really?), I would essentially be doing double buffering if 
 the HWM is hit, since it's not distinguishable from having no peers. IOW, what 
 I want is currently NOT POSSIBLE AT ALL with 0MQ unless I'm missing something.

It doesn't help AT ALL? I think it does.

Let's take another stab at this. According to the FAQ (you have read it, 
right?), when a socket calls zmq_connect() it immediately establishes a local 
queue, whereas zmq_bind() does *not*. So there are some minor differences in 
behavior between connecting and binding.

So, if you call zmq_connect() on your socket and have already set a HWM 
beforehand, then you can write to it and the messages will queue even if there 
are no peers. This sounds like what you want. You can still use the 
non-blocking write technique that I mentioned in my first response to determine 
when you have hit the HWM, if you so desire. There is no way to tell whether 
you have any peers in this (or any) scenario unless you devise a protocol for 
your clients and servers to announce themselves and whatnot. This is not built 
into the library.
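To make that concrete, a minimal sketch of the connect-before-peers behaviour, assuming libzmq 3.x (ZMQ_SNDHWM; plain ZMQ_HWM on 2.x) and a hypothetical endpoint:

    /* Set the HWM before connecting; sends then queue locally (up to roughly
       the HWM) even while no peer has bound yet. Assumes libzmq 3.x and a
       hypothetical endpoint. */
    #include <zmq.h>

    int main (void)
    {
        void *ctx  = zmq_init (1);
        void *push = zmq_socket (ctx, ZMQ_PUSH);

        int hwm = 1000;
        zmq_setsockopt (push, ZMQ_SNDHWM, &hwm, sizeof hwm);
        int linger = 0;   /* so this demo exits even if no peer ever appears */
        zmq_setsockopt (push, ZMQ_LINGER, &linger, sizeof linger);

        zmq_connect (push, "tcp://localhost:5555");   /* peer may not exist yet */

        /* These succeed and are buffered locally; once roughly `hwm` messages
           are queued, a non-blocking send fails with EAGAIN instead. */
        int i;
        for (i = 0; i < 10; i++)
            zmq_send (push, "queued", 6, ZMQ_DONTWAIT);

        zmq_close (push);
        zmq_term (ctx);
        return 0;
    }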

If this is still insufficient for your purposes, then I think what you want to 
accomplish is simply impossible. It's probably impossible to do in 0mq or in 
*any* other networking library.

Patches welcome.

cr



Re: [zeromq-dev] HWM behaviour blocking

2012-05-09 Thread Pieter Hintjens
On Wed, May 9, 2012 at 7:45 AM, Steffen Mueller
zer...@steffen-mueller.net wrote:

 Any chance I could have such a functionality? Of course, being able to
 determine whether there are any peers connected would be great, too.

The standard answer to "can I do funky routing model XYZ over 0MQ" is
"yes, you make it yourself using ROUTER sockets". This is why we
called them that. In particular, the built-in HWM logic is difficult to
mix with routing, and people who try this always end up frustrated.
Happily, it is easy to layer on top.

It looks like you want a combination of reliable service-oriented
queue and credit-based flow control. Read the Guide section on
reliable request reply, in detail. Then read
http://unprotocols.org/blog:15. Mix this together, taste, improve,
repeat.

-Pieter


Re: [zeromq-dev] HWM behaviour blocking

2012-05-09 Thread Steffen Mueller
On 05/10/2012 12:37 AM, Pieter Hintjens wrote:
 On Wed, May 9, 2012 at 7:45 AM, Steffen Mueller
 zer...@steffen-mueller.net  wrote:

 Any chance I could have such a functionality? Of course, being able to
 determine whether there are any peers connected would be great, too.

 The standard answer to "can I do funky routing model XYZ over 0MQ" is
 "yes, you make it yourself using ROUTER sockets". This is why we
 called them that. In particular, the built-in HWM logic is difficult to
 mix with routing, and people who try this always end up frustrated.
 Happily, it is easy to layer on top.

 It looks like you want a combination of reliable service-oriented
 queue and credit-based flow control. Read the Guide section on
 reliable request reply, in detail. Then read
 http://unprotocols.org/blog:15. Mix this together, taste, improve,
 repeat.

Frankly, I fail to see what's so funky about not wanting to block the 
sender if the receiver is restarted, using the queuing of the library[1].

Something as conceptually simple as that can help a great deal in 
decoupling component reliability in an HA setup if you don't have to 
guarantee timeliness (which is virtually impossible anyway in a 
distributed system during a partial failure).

Either way, I understand the message (and Chuck's, too). I'll roll my 
own solution.

--Steffen

[1] The connect/bind asymmetry that Chuck pointed out might help in my 
case, but it makes scaling the number of receivers a little more 
awkward, so I'll have to evaluate the pros and cons carefully.