The issue was solved. This thread discusses an identical problem and its
solution:
https://www.mail-archive.com/libev%40lists.schmorp.de/msg01767.html
http://lists.schmorp.de/pipermail/libev/2012q1/001786.html
On 1/14/2016 6:11 PM, Ed Bukhman wrote:
To follow up on this issue: it consistently takes exactly 60 seconds
for the watcher to be triggered in the client for the first time,
regardless of the frequency or size of the messages published by the
server. This "feels" like some kind of timeout at the system level, but
that's about all I've been able to determine so far.
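One observation worth adding: libev's backends never block for longer
than about 59.7 seconds in one go (MAX_BLOCKTIME in ev.c), so a delay of
almost exactly a minute is consistent with the loop waking up on its own
schedule rather than being notified by the kernel that the fd is
readable. A periodic timer is a cheap way to confirm that the loop
itself is running; below is a minimal standalone sketch, not code from
this application:

#include <ev.h>
#include <stdio.h>

/* If this prints once per second while the io watcher stays silent,
 * the loop itself is running and the problem is in how the socket's
 * fd was registered with the backend. */
static void heartbeat_cb(struct ev_loop *loop, ev_timer *w, int revents)
{
    fprintf(stderr, "loop alive at %.3f\n", ev_now(loop));
}

int main(void)
{
    struct ev_loop *loop = EV_DEFAULT;
    ev_timer heartbeat;

    ev_timer_init(&heartbeat, heartbeat_cb, 1.0, 1.0);
    ev_timer_start(loop, &heartbeat);

    ev_run(loop, 0);
    return 0;
}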
On 1/13/2016 1:41 AM, Ed Bukhman wrote:
I'm hoping there is a good explanation for the behavior I'm seeing. I
have a relatively simple client application which connects to a
server and subscribes to a feed. The socket on the client is then
associated with an io watcher running in the main event loop, so that
messages are received and processed in a non-blocking way. The
problem I'm experiencing is that it takes anywhere from 30 seconds to
a minute for the watcher to be triggered for the first time, even
though the server emits a message every couple of seconds. Once the
watcher has fired the first time, it responds to subsequent messages
as it should. Neither the size nor the frequency of the messages
appears to have much influence on this delay.
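For concreteness, the setup described above looks roughly like this in
libev. It is a minimal sketch with invented names (connect_to_feed()
stands in for whatever connects and subscribes; on_readable() is
sketched further down), not the application's actual code:

#include <ev.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: connects the TCP socket, sends the subscribe
 * request, and returns the connected fd. */
int connect_to_feed(void);

/* Read callback; a sketch of it appears further down. */
void on_readable(struct ev_loop *loop, ev_io *w, int revents);

int main(void)
{
    struct ev_loop *loop = EV_DEFAULT;
    static ev_io feed_watcher;

    int fd = connect_to_feed();
    if (fd < 0) {
        perror("connect_to_feed");
        return EXIT_FAILURE;
    }

    /* The fd must be non-blocking before the watcher starts, so that
     * recv() in the callback can never stall the event loop. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    /* Watch the already-connected fd for readability, then run. */
    ev_io_init(&feed_watcher, on_readable, fd, EV_READ);
    ev_io_start(loop, &feed_watcher);

    ev_run(loop, 0);
    return 0;
}

The details that matter here are that the fd passed to ev_io_init() is
the one that is actually connected, and that the watcher is started on
the same loop that ev_run() later executes.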
I have confirmed with tcpdump that the server does in fact send every
message. Also, if I take the recv() logic out of the callback
function associated with the watcher and run it synchronously,
everything works properly.
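For comparison, a common shape for the watcher callback keeps the
recv() logic inside it but drains the non-blocking socket until the
call would block. A generic sketch follows (the on_readable() name is
invented and message parsing is elided):

#include <ev.h>
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

/* Drain the non-blocking socket: read until recv() would block, then
 * hand control back to the loop until the next EV_READ event. */
void on_readable(struct ev_loop *loop, ev_io *w, int revents)
{
    char buf[4096];

    for (;;) {
        ssize_t n = recv(w->fd, buf, sizeof buf, 0);

        if (n > 0)
            continue;               /* parse n bytes of feed data here */

        if (n == 0) {               /* orderly shutdown by the server */
            ev_io_stop(loop, w);
            ev_break(loop, EVBREAK_ALL);
            return;
        }

        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return;                 /* socket drained; wait for more */

        perror("recv");             /* hard error: stop watching */
        ev_io_stop(loop, w);
        return;
    }
}

With a level-triggered backend such as poll or select, draining until
EAGAIN is optional, but it avoids one loop wakeup per pending message
when data arrives in bursts.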
I'm at a loss to explain the observed behavior and would appreciate
either an explanation or follow-up questions that would help us get
to the cause of this.
Thank you
--Ed