Re: [zeromq-dev] [patch] handle SIGPIPE

2010-07-06 Thread Pieter Hintjens
Dhammika,

Thanks for the patch. We'll apply it ASAP. Please confirm that you're
submitting it under the MIT license... thanks.

-Pieter

Sent from my Android mobile phone.

On Jul 6, 2010 8:01 AM, "Dhammika Pathirana"  wrote:

Patch to handle SIGPIPE in send().
SIGPIPE behavior is not consistent even across *ix platforms: Linux
has MSG_NOSIGNAL and Mac supports SO_NOSIGPIPE. The best option is to set
SIG_IGN, but that's more of an application-level setting. We should document
this.

___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] [patch] handle SIGPIPE

2010-07-06 Thread Dhammika Pathirana
Sure, all my posts are submitted under MIT license.


On 7/6/10, Pieter Hintjens  wrote:
>
>
> Dhammika,
>
> Thanks for the patch. We'll apply it asap. Please confirm you submit it
> under MIT license... thanks.
>
> -Pieter
>
> Sent from my Android mobile phone.
>
>
> On Jul 6, 2010 8:01 AM, "Dhammika Pathirana"  wrote:
>
> Patch to handle SIGPIPE in send().
>  SIGPIPE behavior is not consistent even across *ix platforms. Linux
>  has MSG_NOSIGNAL and Mac supports SO_NOSIGPIPE. Best option is to set
>  SIG_IGN, but it's more of an application setting. We should document
>  this.
>


[zeromq-dev] forking ZMQ_PAIR socket

2010-07-06 Thread hamster
Hello!

I am a newbie with 0mq, so I beg your pardon if this question seems basic; a
quick overview of the docs/FAQs/mailing list gave me no answer.

Is there any way to set up two-way 0mq communication in the "usual"
client-server style? I mean a server which listens on a given port and then
"forks" for every incoming client connection, handling it in a new
process/thread. As far as I understand, ZMQ_REQ/ZMQ_REP sockets are
unidirectional, and ZMQ_PAIR does not allow more than one client to connect
to the "zmq_bound" socket.




Re: [zeromq-dev] [patch] handle SIGPIPE

2010-07-06 Thread Martin Lucina
dhamm...@gmail.com said:
> Patch to handle SIGPIPE in send().
> SIGPIPE behavior is not consistent even across *ix platforms. Linux
> has MSG_NOSIGNAL and Mac supports SO_NOSIGPIPE. Best option is to set
> SIG_IGN, but it's more of an application setting. We should document
> this.

Why would we need to deal with this at all? 0MQ I/O threads already ignore
all signals which includes SIGPIPE. Or is there some case you're hitting
that's not being handled correctly?

-mato



Re: [zeromq-dev] i can't see what i am doing wrong

2010-07-06 Thread Martin Lucina
elliso...@gmail.com said:
> I have run into this many times.  The issue is that the SUB socket
> doesn't get the messages until it actually connects:
> 
> If the PUB starts before the SUB, the PUB will start broadcasting
> before the SUB starts and the SUB won't get those messages that were
> sent before the SUB connects.  A PUB socket is like a radio broadcast.
>  If you aren't listening, you don't get the messages.
> 
> BUT (this is more subtle).  If the SUB starts before the PUB, you will
> still miss messages.  This is because it takes a little bit of time (I
> think 0.1 sec) for the SUB socket to realize the PUB socket has
> started.  In that short time interval, the PUB socket has already
> started sending and you miss a few.

This should be a non-issue in the simple case where a user starts the
subscriber (process b in the example) first, then switches to another window
and starts the publisher (process a) second.

There is a more subtle thing going on here, though, which is that in process
a, zmq_send() is async, so it just means "queue for sending". If you queue
faster than the *actual* send to the network happens and then call
zmq_term(), you will lose outstanding data.

There's no real solution to this right now except adding a sleep(100) in
the publisher or, as Brian says, using some other way to synchronize "start
of data" and "end of data".

-mato


[zeromq-dev] zmq_close() semantics and handling outstanding messages

2010-07-06 Thread Martin Lucina
Hi all,

while implementing a 0MQ architecture which needs to dynamically create and
destroy sockets during operation I ran into the current behaviour of
zmq_close() being semantically different from the standard close() system
call.

Consider a scenario where we wish to send a bunch of messages, then close
the socket:

 zmq_send (s, ...)
 zmq_send (s, ...)
 zmq_send (s, ...)
 zmq_close (s)

The current behaviour is that zmq_close() will discard any messages which
have been queued ("sent") with zmq_send() but have not yet been pushed out
to the network. Contrast this with the behaviour of the close() system call
on a standard socket where the call means "please make this socket go away,
but finish sending any outstanding data on it asynchronously if you
can"[1].

In my opinion the proper solution is to use the same semantics as the
close() system call; in other words, zmq_close() shall invalidate the
socket from the caller's point of view so no further operations may be
performed on it, but 0MQ shall send any outstanding messages in the
background *as long as an endpoint for those messages still exists* before
destroying the socket "for real".

This would mean a second change to the API which would make zmq_term() a
blocking call, since it would need to wait until all outstanding messages
are sent. The analogous functionality for the close() system call is
handled by the OS kernel -- obviously if the OS shuts down then data will
be lost.

The downside is that zmq_term() could freeze for an arbitrary amount of
time if the remote end is "stuck". For applications where this is
undesirable it would mean adding a "KILL" flag or separate zmq_term_kill()
function which means "we don't care, really go away now".

Please let me know your opinions on this change; ultimately I think it's
the right way to go especially if OS integration of 0MQ sockets is (a long
way) down the road.

-mato

[1] This behaviour can be changed using the SO_LINGER option; we'd probably
want to implement a similar option for 0MQ sockets.


Re: [zeromq-dev] zmq_close() semantics and handling outstanding messages

2010-07-06 Thread Matt Weinstein


On Jul 6, 2010, at 1:24 PM, Martin Lucina wrote:

> Hi all,
>
> while implementing a 0MQ architecture which needs to dynamically create and
> destroy sockets during operation I ran into the current behaviour of
> zmq_close() being semantically different from the standard close() system
> call.
>
> Consider a scenario where we wish to send a bunch of messages, then close
> the socket:
>
> zmq_send (s, ...)
> zmq_send (s, ...)
> zmq_send (s, ...)
> zmq_close (s)
>
> The current behaviour is that zmq_close() will discard any messages which
> have been queued ("sent") with zmq_send() but have not yet been pushed out
> to the network. Contrast this with the behaviour of the close() system call
> on a standard socket where the call means "please make this socket go away,
> but finish sending any outstanding data on it asynchronously if you
> can"[1].
>
> In my opinion the proper solution is to use the same semantics as the
> close() system call, in other words, zmq_close() shall invalidate the
> socket from the caller's point of view so no further operations may be
> performed on it, but 0MQ shall send any outstanding messages in the
> background *as long as an endpoint for those messages still exists* before
> destroying the socket "for real".

Would this be logical to implement as a new zmq_setsockopt() option?

> This would mean a second change to the API which would make zmq_term() a
> blocking call, since it would need to wait until all outstanding messages
> are sent. The analogous functionality for the close() system call is
> handled by the OS kernel -- obviously if the OS shuts down then data will
> be lost.

And I'm looking for a way to dynamically change the number of
concurrent user threads available for a context, so maybe it's time
for zmq_setcontextopt()? ;-)

> The downside is that zmq_term() could freeze for an arbitrary amount of
> time if the remote end is "stuck". For applications where this is
> undesirable it would mean adding a "KILL" flag or separate zmq_term_kill()
> function which means "we don't care, really go away now".
>
> Please let me know your opinions on this change; ultimately I think it's
> the right way to go especially if OS integration of 0MQ sockets is (a long
> way) down the road.
>
> -mato
>
> [1] This behaviour can be changed using the SO_LINGER option, we'd probably
> want to implement a similar option for 0MQ sockets.


Re: [zeromq-dev] zmq_close() semantics and handling outstanding messages

2010-07-06 Thread Martin Lucina
matt_weinst...@yahoo.com said:
> > In my opinion the proper solution is to use the same semantics as the
> > close() system call, in other words, zmq_close() shall invalidate the
> > socket from the caller's point of view so no further operations may be
> > performed on it, but 0MQ shall send any outstanding messages in the
> > background *as long as an endpoint for those messages still exists* before
> > destroying the socket "for real".
> >
> Would this be logical to implement as a new zmq_setsockopt() option?

Ultimately the semantics will *have* to change if 0MQ sockets are ever to be
integrated into the OS.

If your primary priority is backward compatibility, then yes, the "new"
behaviour would have to become a socket option. I'm not convinced that
keeping incorrect behaviour for the sake of backward compatibility is a
good idea, and my view is that the current behaviour is definitely incorrect
in the long run.

> > This would mean a second change to the API which would make zmq_term() a
> > blocking call, since it would need to wait until all outstanding messages
> > are sent. The analogous functionality for the close() system call is
> > handled by the OS kernel -- obviously if the OS shuts down then data will
> > be lost.
>
> And I'm looking for a way to dynamically change the number of
> concurrent user threads available for a context, so maybe it's time
> for zmq_setcontextopt()? ;-)

Probably. get/setcontextopt() would be the equivalent of sysctl() or similar
in the OS space.

-mato


Re: [zeromq-dev] zmq_close() semantics and handling outstanding messages

2010-07-06 Thread Brian Granger
Martin,

On Tue, Jul 6, 2010 at 10:24 AM, Martin Lucina  wrote:
> Hi all,
>
> while implementing a 0MQ architecture which needs to dynamically create and
> destroy sockets during operation I ran into the current behaviour of
> zmq_close() being semantically different from the standard close() system
> call.

I have run into this issue as well.

> Consider a scenario where we wish to send a bunch of messages, then close
> the socket:
>
>  zmq_send (s, ...)
>  zmq_send (s, ...)
>  zmq_send (s, ...)
>  zmq_close (s)
>
> The current behaviour is that zmq_close() will discard any messages which
> have been queued ("sent") with zmq_send() but have not yet been pushed out
> to the network. Contrast this with the behaviour of the close() system call
> on a standard socket where the call means "please make this socket go away,
> but finish sending any outstanding data on it asynchronously if you
> can"[1].

The other issue with the current API is that it is non-deterministic.
Depending on the timing of when zmq_close is called, the messages
may or may not get sent.

> In my opinion the proper solution is to use the same semantics as the
> close() system call, in other words, zmq_close() shall invalidate the
> socket from the caller's point of view so no further operations may be
> performed on it, but 0MQ shall send any outstanding messages in the
> background *as long as a endpoint for those messages still exists* before
> destroying the socket "for real".

+1

> This would mean a second change to the API which would make zmq_term() a
> blocking call, since it would need to wait until all outstanding messages
> are sent. The analogous functionality for the close() system call is
> handled by the OS kernel -- obviously if the OS shuts down then data will
> be lost.

+1

> The downside is that zmq_term() could freeze for an arbitrary amount of
> time if the remote end is "stuck". For applications where this is
> undesirable it would mean adding a "KILL" flag or separate zmq_term_kill()
> function which means "we don't care, really go away now".
>
> Please let me know your opinions on this change; ultimately I think it's
> the right way to go especially if OS integration of 0MQ sockets is (a long
> way) down the road.

I think this would be a great change in the API.

Cheers,

Brian

> -mato
>
> [1] This behaviour can be changed using the SO_LINGER option, we'd probably
> want to implement a similar option for 0MQ sockets.



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgran...@calpoly.edu
elliso...@gmail.com


Re: [zeromq-dev] zmq_close() semantics and handling outstanding messages

2010-07-06 Thread Brian Granger
Oh, I forgot the mention another point about this.

I have started to put time.sleep(?) calls at the end of my zmq
applications because of this.  But, this has a huge problem that is
almost impossible to fix with the current API.  The amount of time I
need to sleep before calling zmq_term is unpredictable and depends on
things like:

* The number of outstanding messages.
* The size of the outstanding messages.

There isn't really any way of knowing these things in general, so if
my application will have lots of large messages, I have to put in
*really* long sleeps before shutting down.  I would much rather have
zmq_term block for exactly the right amount of time.

Cheers,

Brian

On Tue, Jul 6, 2010 at 10:24 AM, Martin Lucina  wrote:
> Hi all,
>
> while implementing a 0MQ architecture which needs to dynamically create and
> destroy sockets during operation I ran into the current behaviour of
> zmq_close() being semantically different from the standard close() system
> call.
>
> Consider a scenario where we wish to send a bunch of messages, then close
> the socket:
>
>  zmq_send (s, ...)
>  zmq_send (s, ...)
>  zmq_send (s, ...)
>  zmq_close (s)
>
> The current behaviour is that zmq_close() will discard any messages which
> have been queued ("sent") with zmq_send() but have not yet been pushed out
> to the network. Contrast this with the behaviour of the close() system call
> on a standard socket where the call means "please make this socket go away,
> but finish sending any outstanding data on it asynchronously if you
> can"[1].
>
> In my opinion the proper solution is to use the same semantics as the
> close() system call, in other words, zmq_close() shall invalidate the
> socket from the caller's point of view so no further operations may be
> performed on it, but 0MQ shall send any outstanding messages in the
> background *as long as a endpoint for those messages still exists* before
> destroying the socket "for real".
>
> This would mean a second change to the API which would make zmq_term() a
> blocking call, since it would need to wait until all outstanding messages
> are sent. The analogous functionality for the close() system call is
> handled by the OS kernel -- obviously if the OS shuts down then data will
> be lost.
>
> The downside is that zmq_term() could freeze for an arbitrary amount of
> time if the remote end is "stuck". For applications where this is
> undesirable it would mean adding a "KILL" flag or separate zmq_term_kill()
> function which means "we don't care, really go away now".
>
> Please let me know your opinions on this change; ultimately I think it's
> the right way to go especially if OS integration of 0MQ sockets is (a long
> way) down the road.
>
> -mato
>
> [1] This behaviour can be changed using the SO_LINGER option, we'd probably
> want to implement a similar option for 0MQ sockets.



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgran...@calpoly.edu
elliso...@gmail.com


Re: [zeromq-dev] i can't see what i am doing wrong

2010-07-06 Thread Brian Granger
On Tue, Jul 6, 2010 at 9:52 AM, Martin Lucina  wrote:
> elliso...@gmail.com said:
>> I have run into this many times.  The issue is that the SUB socket
>> doesn't get the messages until it actually connects:
>>
>> If the PUB starts before the SUB, the PUB will start broadcasting
>> before the SUB starts and the SUB won't get those messages that were
>> sent before the SUB connects.  A PUB socket is like a radio broadcast.
>>  If you aren't listening, you don't get the messages.
>>
>> BUT (this is more subtle).  If the SUB starts before the PUB, you will
>> still miss messages.  This is because it takes a little bit of time (I
>> think 0.1 sec) for the SUB socket to realize the PUB socket has
>> started.  In that short time interval, the PUB socket has already
>> started sending and you miss a few.
>
> This should be a non-issue in the simple case where a user starts the
> subscriber (process b in the example) 1st, then switches to another window
> and starts the publisher (process a) 2nd.

It depends on which side does the bind and which does the connect.  If
the SUB socket does the connect, you will still miss messages if you start
the SUB first.  The SUB socket attempts to connect every 0.1 seconds, and
messages published before the connection completes are missed.

> There is a more subtle thing going on here though which is that in process
> a zmq_send() is async so this just means "queue for sending". If you queue
> quicker than the *actual* send to the network happens and call zmq_term()
> you will lose outstanding data.

Yep, but hopefully that will be fixed.

> There's no real solution to this right now except adding a sleep(100) in
> the publisher, or as Brian says using some other way to synchronize "Start
> of data" and "End of data".

> -mato



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgran...@calpoly.edu
elliso...@gmail.com


Re: [zeromq-dev] i can't see what i am doing wrong

2010-07-06 Thread Andrew Hume

okay, i gather you sleep before you start. fine.

i can do a handshake around the exit thing IF i can find out whether
the context has finished its work. is there a way to do that?




--
Andrew Hume  (best -> Telework) +1 732-886-1886
and...@research.att.com  (Work) +1 973-360-8651
AT&T Labs - Research; member of USENIX and LOPSA





Re: [zeromq-dev] Problems with PUB/SUB sockets and OpenPGM

2010-07-06 Thread Steven McCoy
On 30 June 2010 20:30, Jens Mehnert  wrote:

>  Hello Steve, thanks for the quick answer. What about my other problem.
> Isn't it possible to bind /connect a sender/receiver
> to two different sockets on the same interface like this:
>
> private static final ZMQ.Context CONTEXT = ZMQ.context (1);
> private static final ZMQ.Socket PRODUCER_SOCKET = CONTEXT.socket(ZMQ.PUB);
> private static final ZMQ.Socket CONSUMER_SOCKET = CONTEXT.socket(ZMQ.SUB);
> ...
>
> PRODUCER_SOCKET.bind("tcp://127.0.0.1:");
>
> CONSUMER_SOCKET.setsockopt(ZMQ.SUBSCRIBE, "");
> CONSUMER_SOCKET.connect("tcp://127.0.0.1:");
> ...
>
> Producer Thread:
>
> while(!Thread.currentThread().interrupted()) {
> try {
> String value = TinyUUID.randomTinyUUID().toString();
> LOG.debug("Send " + value);
> PRODUCER_SOCKET.send(value.getBytes(), 0);
> Thread.sleep(1000);
> } catch (Exception e) {
> throw new RuntimeException(e);
> }
> }
>
> Consumer Thread (Runnable scheduled via ExecutorService):
>
>
> while (!Thread.currentThread().isInterrupted()) {
> try {
> LOG.debug("Receiving ...");
>
> byte[] rawMessage = CONSUMER_SOCKET.recv(0);
> LOG.debug("Received %s", new String(rawMessage));
>
> } catch (Exception e) {
> LOG.error("An error occured handling zmq message ...");
> }
> }
>
> The problem is that the send thread using the producer socket blocks on
> send. If I don't start the consumer, send() does not block. Furthermore,
> if I use two different VMs (one producer, one consumer main app) it works
> splendidly too.
>
> What am I doing wrong?
>
> Best regards, Jens
>
>

Bump,  I answered the PGM part, no idea about the ZMQ part.  Hopefully
someone else answers.

-- 
Steve-o


Re: [zeromq-dev] forking ZMQ_PAIR socket

2010-07-06 Thread Peter Alexander
Hi.

On Tue, Jul 6, 2010 at 11:57 AM, hamster  wrote:
> Hello!
>
> I am a newbie with 0mq so I beg my pardon if this question looks boring, yet
> quick overview of docs/faqs/maillist gave me no answer.

Yeah, doc organization is still a work in progress. But as you're
learning, feel free to contribute by signing in to the web site, which
is a wiki, and making any changes.

>
> Is there any way to setup two-way 0mq communication in "usual" client-server
> style? I mean the server which listens on a given port and then "forks" for
> every incoming client connection taking it with new process/thread?

Here's an example http://www.zeromq.org/blog:multithreaded-server

> As far as I
> understand ZMQ_REQ/ZMQ_REP sockets are unidirectional

Typically, a ZMQ_REQ socket (client) sends a request message to a
ZMQ_REP (server) socket. The ZMQ_REQ socket then blocks until the reply
message from the ZMQ_REP socket is returned.

> and ZMQ_PAIR does not
> allow more than one client to connect to that "zmq_bound" socket.


Re: [zeromq-dev] [patch] handle SIGPIPE

2010-07-06 Thread Dhammika Pathirana
On 7/6/10, Martin Lucina  wrote:
> dhamm...@gmail.com said:
>  > Patch to handle SIGPIPE in send().
>  > SIGPIPE behavior is not consistent even across *ix platforms. Linux
>  > has MSG_NOSIGNAL and Mac supports SO_NOSIGPIPE. Best option is to set
>  > SIG_IGN, but it's more of an application setting. We should document
>  > this.
>
>
> Why would we need to deal with this at all? 0MQ I/O threads already ignore
>  all signals which includes SIGPIPE. Or is there some case you're hitting
>  that's not being handled correctly?

How does 0MQ ignore signals? I don't see a signal handler or SIG_IGN flag.
Default SIGPIPE behavior is to terminate the process.


Re: [zeromq-dev] [patch] handle SIGPIPE

2010-07-06 Thread Steven McCoy
On 7 July 2010 11:29, Dhammika Pathirana  wrote:

> How does 0MQ ignore signals? I don't see a signal handler or SIG_IGN flag.
> Default SIGPIPE behavior is to terminate the process.
>
>
It ignores them by not handling them at all.  The application developer
needs to manage them.

At a guess.

-- 
Steve-o


Re: [zeromq-dev] zmq_close() semantics and handling outstanding messages

2010-07-06 Thread Peter Alexander
Hi Martin

On Tue, Jul 6, 2010 at 1:51 PM, Martin Lucina  wrote:
> matt_weinst...@yahoo.com said:
>> > In my opinion the proper solution is to use the same semantics as the
>> > close() system call, in other words, zmq_close() shall invalidate the
>> > socket from the caller's point of view so no further operations may be
>> > performed on it, but 0MQ shall send any outstanding messages in the
>> > background *as long as an endpoint for those messages still exists* before
>> > destroying the socket "for real".
>> >
>> Would this be logical to implement as a new zmq_setsockopt() option?
>
> Ultimately the semantics will *have* to change if 0MQ sockets would be
> integrated into the OS.
>
> If your primary priority is backward compatibility, then yes, the "new"
> behaviour would have to become a socket option. I'm not convinced that
> keeping incorrect behaviour for the sake of backward compatibility is a
> good idea and my view is that the current behaviour is definitely incorrect
> in the long run.

Is it time to lay out a road-map document and start a zeromq3
development branch on GitHub? That would be where changes that break
backwards compatibility go, and where the next generation of 0mq can
take shape.

I realize the following is known by everybody, but there still seems to
have been some confusion at times.

" In principle, in subsequent releases, the major number is increased
when there are significant jumps in functionality [breakable], the
minor number is incremented when only minor features or significant
fixes have been added [non-breakable], and the revision number is
incremented when minor bugs are fixed." [1]

[1] http://en.wikipedia.org/wiki/Software_versioning

>
>>
>> > This would mean a second change to the API which would make
>> > zmq_term() a
>> > blocking call, since it would need to wait until all outstanding
>> > messages
>> > are sent. The analogous functionality for the close() system call is
>> > handled by the OS kernel -- obviously if the OS shuts down then data
>> > will
>> > be lost.
>>
>> And I'm looking for a way to dynamically change the number of
>> concurrent user threads available for a context, so maybe it's time
>> for zmq_setcontextopt()? ;-)
>
> Probably. get/setcontextopt() are the equivalent of sysctl() or similar in
> the OS space.
>
> -mato


Re: [zeromq-dev] zmq_close() semantics and handling outstanding messages

2010-07-06 Thread Dhammika Pathirana
But how is this different from the network or the remote host
queuing/dropping messages? Sending queued messages doesn't really guarantee
delivery.

This gets even worse because TCP sends RST (ECONNRESET) on receiving data
for a closed socket. In the HTTP world this is worked around by the sender
doing a half-close and the receiver reading EOF and closing its end.



On 7/6/10, Martin Lucina  wrote:
> Hi all,
>
>  while implementing a 0MQ architecture which needs to dynamically create and
>  destroy sockets during operation I ran into the current behaviour of
>  zmq_close() being semantically different from the standard close() system
>  call.
>
>  Consider a scenario where we wish to send a bunch of messages, then close
>  the socket:
>
>   zmq_send (s, ...)
>   zmq_send (s, ...)
>   zmq_send (s, ...)
>   zmq_close (s)
>
>  The current behaviour is that zmq_close() will discard any messages which
>  have been queued ("sent") with zmq_send() but have not yet been pushed out
>  to the network. Contrast this with the behaviour of the close() system call
>  on a standard socket where the call means "please make this socket go away,
>  but finish sending any outstanding data on it asynchronously if you
>  can"[1].
>
>  In my opinion the proper solution is to use the same semantics as the
>  close() system call, in other words, zmq_close() shall invalidate the
>  socket from the caller's point of view so no further operations may be
>  performed on it, but 0MQ shall send any outstanding messages in the
>  background *as long as a endpoint for those messages still exists* before
>  destroying the socket "for real".
>
>  This would mean a second change to the API which would make zmq_term() a
>  blocking call, since it would need to wait until all outstanding messages
>  are sent. The analogous functionality for the close() system call is
>  handled by the OS kernel -- obviously if the OS shuts down then data will
>  be lost.
>
>  The downside is that zmq_term() could freeze for an arbitrary amount of
>  time if the remote end is "stuck". For applications where this is
>  undesirable it would mean adding a "KILL" flag or separate zmq_term_kill()
>  function which means "we don't care, really go away now".
>
>  Please let me know your opinions on this change; ultimately I think it's
>  the right way to go especially if OS integration of 0MQ sockets is (a long
>  way) down the road.
>
>  -mato
>
>  [1] This behaviour can be changed using the SO_LINGER option, we'd probably
>  want to implement a similar option for 0MQ sockets.