On Sun, Nov 04, 2007 at 05:04:25PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
> > * multiple watchers can wait for the same event, there is no limitation
> >   to one or two watchers for signals and io.
> Could you give me an example of where that is important?

Mostly in environments using some form of garbage collection. For example,
this idiom is common in Perl:

   $event = EV::io $fd, ...

If $event happens to contain an old watcher and $fd happens to refer to
the same fd as that old watcher, this will lead to problems, as both
watchers are briefly alive at the same time.

There is actually a lot of code that relies on this just working, and the
only other event loop I know that has a problem with this is Tk.

> > * there are two types of timers, based on real time differences and wall
> >   clock time (cron-like). timers can also be repeating and be reset at
> >   almost no cost (for idle timeouts used by many network servers). time
> >   jumps get detected reliably in both directions with or without a
> >   monotonic clock.
> (See my other mail about Linux's new timerfd facility.)

(timerfd unfortunately makes little sense for this: it adds overhead, and
I can't see a compelling advantage, as one still runs into the same time
jump problems with periodic timers.)

> Nice; repeating and absolute timers have come up several times before, too.

This was something I always missed in event loops. That is, some event
loops have one timer type, some the other, but never both.

> > * timers are managed by a priority queue (O(1) for important operations
> >   as opposed to O(log n) in libevent, also resulting in much simpler code).
> In terms of concrete data types, you appear to have used a binary heap?
> So by "important operations" you mean removal, correct?

removal: O(log n)
insertion: O(log n)
find next: O(1)

> still O(log n)? The asymptotic behavior is no different, then, as
> insertion happens at least as often as removal.

Yes, but:

a) finding the next timer is a constant-time operation
b) a red-black tree is more than three times as slow

(see the updated benchmark at http://libev.schmorp.de/bench.html,
especially the difference between the first example (no timers) and the
second (timers in use))

> > * I added idle watchers, pid watchers and hook watchers into the event loop,
> >   as is required for integration of other event-based libraries, without
> >   having to force the use of some construct around event_loop.
> Pardon my ignorance, but what are hook watchers?

If you want to plug other event-based libraries into the event loop, you
need to be able to hook into the loop; that is what these watcher types
provide.

The alternative would be to write your own event_loop using EV_NONBLOCK,
but that isn't modular: if two different software modules each ship an
event_loop you *must* use, you lose. Prepare/check watchers solve this
problem nicely.

A number of event loops have them, and they are useful for other things,
such as transparently integrating coroutine packages etc.

It's not a killer feature, just very, very useful in some cases.

> pid watchers I assume to be a fancy SIGCHLD handler?

Essentially, yes.
> That's a potentially useful feature, but why would it require a
> construct around event_loop?

I don't understand that; there is no construct around event_loop, it's
handled completely separately.

The reason it exists is to allow sharing of this potentially unsharable
resource. For example, poll and select let you do "everything" (with fds),
but of course only one component per (single-threaded) process can use
them, as they block.

The same is true for signals: you can't share them with sigaction, as
sigaction only allows one handler per signal.

And the same is true for SIGCHLD.

If your event loop provides support for it, you are less likely to run
into a situation where two software packages in the same process need
access to it and stomp on each other.

> > * the backends use a much simpler design. unlike in libevent, the code to
> >   handle events is not duplicated for each backend, backends deal only
> >   with file descriptor events and a single timeout value, everything else
> >   is handled by the core, which also optimises state changes (the epoll
> >   backend is 100 lines in libev, as opposed to >350 lines in libevent,
> >   without suffering from its limitations).
> Nice.

And while investigating the WIN32-Code/win32.c libevent backend, I found
out that it's just a glorified variant of the select backend, except that
it's O(n) in registering and deregistering.

> > As for compatibility, the actual libev API is very different from the
> > libevent API (although the design is similar), but there is an emulation
> > layer with a corresponding event.h file that supports the event library
> > (but no evbuffer, evdns, evhttp etc.).
> I think the API needs more hashing out. It is...different...but I'm not
> sure it's necessarily better, and I don't like change for change's sake.

There has been no change for change's sake; I can explain the rationale
behind each and every change (I hope :).

> A few notes:
> * what is the purpose of EV_COMMON?

Allowing customised event watchers. If you are concerned, treat it as an
internal symbol. Its use is documented in the libev documentation.

> From first glance, I'm concerned that it could not be used properly
> unless libev.so and all callers are compiled with the same flags, which
> seems impractical if the library ever gains wide use.

This is true, but it's an optional feature you don't have to use. In case
you wonder, EV, the Perl interface to libev, uses this feature.

It makes most sense when embedding, of course (not all the world is an .so).

> * on ev_once failure, you're calling the callback with EV_ERROR? Yuck.
> That's quite surprising behavior, and I could see it leading to stack
> overflows as each ev_once tries to issue another one.

All callbacks will be called with EV_ERROR when an error occurs. And yes,
if you don't do error handling and endlessly retry the same operation in a
loop, you run into problems.

But as that is an obvious programming bug, I don't see any problem here.

Besides, if you cannot malloc the few bytes ev_once requires, you need a
*lot* of good error handling code to continue sensibly.

> * What's your use case for ev_loop_new() and ev_loop_default()'s bitmask
> of allowed implementations?

libevent's unconditional use of getenv raised concerns with me, and
apparently with some users on this list too, so this is one way to disable
it (EVMETHOD_ANY instead of EVMETHOD_ALL). Also, I am sure some apps
want control over the allowed event loops, e.g. to rule out select because
it is known not to work for them.

> * (again, just skimming) you're closing fds automatically on ENOMEM?
> Ergh.

There is little else to do. This isn't malloc failing; it is a kernel
interface, and the failure is usually due to hard limits (not really out
of memory).

Point being, libev(ent) cannot continue in this condition; there can be
no progress. Closing the fd and signalling the relevant part of the
application is no different from a network overload or failure resulting
in the same condition.

If the app cannot handle that, deep shit.

> That seems rather undesirable for many applications.

Well, it's arguably better than libevent's behaviour, which is simply
returning from event_loop, leaving the app unclear about what has happened
and what to do.

In any case, you can get the same behaviour as libevent by calling unloop
in case of an error, so the interface is strictly more powerful.

Thanks a lot for your questions, I hope I could clarify some things and
design decisions. It's indeed not easy to get everything right, and I am
sure the ev.h API can be improved. In some cases there have been design
trade-offs (no mutexes leading to less automatic management, for example).

I do think the design is useful in practice, where error handling is rarely
done to the utmost extent and sensible behaviour in dead-end situations
counts a lot.

                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_              http://www.deliantra.net
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /      [EMAIL PROTECTED]
      -=====/_/_//_/\_,_/ /_/\_\
Libevent-users mailing list
