I'm having trouble getting timeouts to work.  According to the docs,
"libevent can assign timeout events to file descriptors that are triggered
whenever a certain amount of time has passed with no activity on a file
descriptor."

When I create a socket read event and pass a {12, 0} timeout to
event_add(), my callback gets called with EV_TIMEOUT 12 seconds after I
call event_add(), even when read events have fired just a second or two
earlier.  Contrary to the docs, libevent triggers the timeout regardless
of activity on the socket.
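
Roughly, my setup boils down to this (stdin standing in for the real
socket here, just so the snippet is self-contained):

#include <event.h>
#include <stdio.h>
#include <unistd.h>

static void read_cb(int fd, short what, void *arg)
{
    if (what & EV_TIMEOUT) {
        /* fires 12 seconds after event_add(), even when reads
           arrived on fd only a second or two earlier */
        printf("EV_TIMEOUT on fd %d\n", fd);
    } else if (what & EV_READ) {
        char buf[1024];
        read(fd, buf, sizeof(buf));
        printf("EV_READ on fd %d\n", fd);
    }
}

int main(void)
{
    struct event ev;
    struct timeval tv = {12, 0};    /* the {12, 0} timeout */

    event_init();
    event_set(&ev, STDIN_FILENO, EV_READ | EV_PERSIST, read_cb, NULL);
    event_add(&ev, &tv);
    return event_dispatch();
}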

I'm calling event_set() with EV_PERSIST, and building against libevent
1.3b on Linux.  Are timeouts known to be broken with either of these?

Furthermore, the docs describe different behavior for timers and timeouts,
yet event.h defines the corresponding macros identically:

#define evtimer_set(ev, cb, arg)        event_set(ev, -1, 0, cb, arg)
#define timeout_set(ev, cb, arg)        event_set(ev, -1, 0, cb, arg)
#define evtimer_add(ev, tv)             event_add(ev, tv)
#define timeout_add(ev, tv)             event_add(ev, tv)
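
As far as I can tell from those macros, a "timer" is just an event with
fd -1 and no I/O flags, e.g. (throwaway names):

#include <event.h>

static void timer_cb(int fd, short what, void *arg)
{
    /* called with fd == -1 and what == EV_TIMEOUT */
}

void add_one_second_timer(void)
{
    /* static so the event outlives this function; libevent
       keeps a pointer to it until the timer fires */
    static struct event timer_ev;
    static struct timeval one_sec = {1, 0};

    evtimer_set(&timer_ev, timer_cb, NULL);  /* == event_set(&timer_ev, -1, 0, timer_cb, NULL) */
    evtimer_add(&timer_ev, &one_sec);        /* == event_add(&timer_ev, &one_sec) */
}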

Can someone explain that?
