Marc Lehmann wrote:
> On Sun, Nov 04, 2007 at 05:04:25PM -0800, Scott Lamb <[EMAIL PROTECTED]> wrote:
>>> * multiple watchers can wait for the same event, there is no limitation
>>>   to one or two watchers for signals and io.
>> Could you give me an example of where that is important?
> 
> Mostly in environments using some form of garbage collection. For example,
> this idiom is common in Perl:
> 
>    $event = EV::io $fd, ...
> 
> If $event happens to contain an old watcher and $fd happens to refer to
> the same fd as that old watcher, this will lead to problems, as both
> watchers are alive for a short time.
> 
> There is actually a lot of code that relies on this just working, and the
> only other event loop I know that has a problem with this is Tk.

Ugh, I'd argue that idiom is broken. But if the support's free, I guess
it doesn't matter.
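
To translate the idiom into C terms, here is a minimal sketch against
the current libev API (the fd choice and callback name are mine); the
point is that both watchers are legitimately active on the same fd for
a moment:

    #include <ev.h>

    static void read_cb(struct ev_loop *loop, ev_io *w, int revents)
    {
        /* ... handle readability ... */
    }

    int main(void)
    {
        struct ev_loop *loop = ev_default_loop(0);
        int fd = 0; /* stdin, for illustration */
        ev_io old_w, new_w;

        ev_io_init(&old_w, read_cb, fd, EV_READ);
        ev_io_start(loop, &old_w);

        /* the Perl assignment: the new watcher starts first, then
         * garbage collection stops the old one - for a moment both
         * are active on the same fd, which libev allows */
        ev_io_init(&new_w, read_cb, fd, EV_READ);
        ev_io_start(loop, &new_w);
        ev_io_stop(loop, &old_w);

        ev_run(loop, 0);
        return 0;
    }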

>>> * there are two types of timers, based on real time differences and wall
>>>   clock time (cron-like). timers can also be repeating and be reset at
>>>   almost no cost (for idle timeouts used by many network servers). time
>>>   jumps get detected reliably in both directions with or without a
>>>   monotonic clock.
>> (See my other mail about Linux's new timerfd facility.)
> 
> (timerfd unfortunately makes little sense for this, as it adds overhead
> without a compelling advantage that I can see: one still runs into the
> same time-jump problems with periodic timers).
> 
>> Nice; repeating and absolute timers have come up several times before, too.
> 
> This was something I always missed in event loops. That is, some event
> loops have one timer type, some the other, but never both.
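
For the archives, a minimal sketch of both timer types using today's
libev names (ev_timer for relative timeouts, reset cheaply with
ev_timer_again; ev_periodic for cron-like wall-clock schedules).
Callback names and intervals are mine:

    #include <ev.h>

    static void timeout_cb(struct ev_loop *loop, ev_timer *w, int revents)
    {
        /* relative timer fired, e.g. an idle connection timed out */
    }

    static void hourly_cb(struct ev_loop *loop, ev_periodic *w, int revents)
    {
        /* wall-clock timer fired; robust against time jumps */
    }

    int main(void)
    {
        struct ev_loop *loop = ev_default_loop(0);

        /* relative: fire after 10s of inactivity, repeat every 10s */
        ev_timer idle;
        ev_timer_init(&idle, timeout_cb, 10., 10.);
        ev_timer_start(loop, &idle);

        /* on activity, re-arm the idle timeout at almost no cost */
        ev_timer_again(loop, &idle);

        /* absolute: fire at every full hour of wall-clock time */
        ev_periodic hourly;
        ev_periodic_init(&hourly, hourly_cb, 0., 3600., 0);
        ev_periodic_start(loop, &hourly);

        ev_run(loop, 0);
        return 0;
    }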
> 
>>> * timers are managed by a priority queue (O(1) for important operations
>>>   as opposed to O(log n) in libevent, also resulting in much simpler code).
>> In terms of concrete data types, you appear to have used a binary heap?
>> So by "important operations" you mean removal, correct?
> 
> removal: O(log n)
> insertion: O(log n)
> find next: O(1)
> 
>> still O(log n)? The asymptotic behavior is no different, then, as
>> insertion happens at least as often as removal.
> 
> Yes, but:
> 
> a) finding the next timer is a constant-time operation
> b) a red-black tree is more than three times as slow
> 
> (see the updated benchmark at http://libev.schmorp.de/bench.html,
> especially the difference between the first (no timers) and the second
> examples (timers in use))

Ahh, very nice benchmarks.
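
To make the asymmetry concrete: in a generic array-based binary
min-heap (a sketch, not libev's actual code), peeking at the next timer
is a single array read, while insertion has to sift up through O(log n)
levels:

    #include <stddef.h>

    typedef struct { double at[1024]; size_t n; } heap;

    /* find next: O(1) - the minimum is always at the root */
    static double heap_min(const heap *h) { return h->at[0]; }

    /* insertion: O(log n) - sift the new key up to its place */
    static void heap_push(heap *h, double key)
    {
        size_t i = h->n++;
        while (i > 0) {
            size_t parent = (i - 1) / 2;
            if (h->at[parent] <= key)
                break;
            h->at[i] = h->at[parent]; /* move larger parent down */
            i = parent;
        }
        h->at[i] = key;
    }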

> 
>>> * I added idle watchers, pid watchers and hook watchers into the event loop,
>>>   as is required for integration of other event-based libraries, without
>>>   having to force the use of some construct around event_loop.
>> Pardon my ignorance, but what are hook watchers?
> 
> If you want to plug other event-based libraries into the event loop,
> you need to be able to hook into it; this is what those watcher types
> provide.
> 
> The alternative would be to write your own event_loop with EV_NONBLOCK,
> but that isn't modular: if you have two different software modules, each
> with its own event_loop you *must* use, you lose. Prepare/check watchers
> solve this problem nicely.
> 
> A number of event loops have them, and they are useful for other things,
> such as transparently integrating coroutine packages etc.
> 
> It's not a killer feature, just very useful in some cases.
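
A minimal sketch of the hook mechanism as it looks with the current
prepare/check watcher API; the comments mark where a real integration
of a foreign event library would plug in:

    #include <ev.h>

    static void prepare_cb(struct ev_loop *loop, ev_prepare *w, int revents)
    {
        /* runs just before the loop blocks: register/refresh ev_io
         * watchers for whatever fds the foreign library needs now */
    }

    static void check_cb(struct ev_loop *loop, ev_check *w, int revents)
    {
        /* runs just after the loop wakes up: hand ready fds back to
         * the foreign library's dispatch function */
    }

    int main(void)
    {
        struct ev_loop *loop = ev_default_loop(0);
        ev_prepare prep;
        ev_check   chk;

        ev_prepare_init(&prep, prepare_cb);
        ev_prepare_start(loop, &prep);
        ev_check_init(&chk, check_cb);
        ev_check_start(loop, &chk);

        ev_run(loop, 0);
        return 0;
    }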
> 
>> pid watchers I assume to be a fancy SIGCHLD handler?
> 
> Yes.
> 
>> That's a potentially useful feature, but why would it require a
>> construct around event_loop?
> 
> I don't understand that; there is no construct around event_loop, it's
> handled completely separately.

My question was in response to your "I added idle watchers, pid watchers
and hook watchers into the event loop, as is required for integration of
other event-based libraries, without having to force the use of some
construct around event_loop."

> The reason it exists is to allow sharing of this potentially unsharable
> resource. For example, poll and select let you do "everything" (with fds),
> but you can of course only have one component per (single-thread) process
> using them, as they are blocking.
> 
> The same thing is true for signals: you can't share them with sigaction, as
> sigaction only allows one user.
> 
> And the same thing is true for sigchld.

Yes, I could see why sharing SIGCHLD would be useful. I was thinking of
this when I asked above about multiple watchers for the same event, but
it was the only example I could think of off-hand, so it seemed like two
features addressing the same use case.
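
For concreteness, a sketch using the current ev_child API: two
independent components can each start a watcher for the same child (or
pid 0 for "any child"), which plain sigaction cannot offer:

    #include <ev.h>

    static void child_cb(struct ev_loop *loop, ev_child *w, int revents)
    {
        /* w->rpid holds the pid that changed state, w->rstatus the
         * wait status; both watchers fire for the same SIGCHLD */
        ev_child_stop(loop, w);
    }

    int main(void)
    {
        /* child watchers only work on the default loop */
        struct ev_loop *loop = ev_default_loop(0);

        ev_child a, b;
        ev_child_init(&a, child_cb, 0, 0); /* component A: any child */
        ev_child_init(&b, child_cb, 0, 0); /* component B: any child */
        ev_child_start(loop, &a);
        ev_child_start(loop, &b);

        ev_run(loop, 0);
        return 0;
    }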

>> A few notes:
>>
>> * what is the purpose of EV_COMMON?
> 
> Allowing customised event watchers. If you are concerned, treat it as an
> internal symbol. Its use is documented here:
> http://cvs.schmorp.de/libev/README.embed
> 
>> From first glance, I'm concerned that it could not be used properly
>> unless libev.so and all callers are compiled with the same flags, which
>> seems impractical if the library ever gains wide use.
> 
> This is true, but it's an optional feature you don't have to use.  In case
> you wonder, EV, the perl interface to libev, uses this feature.
> 
> It makes most sense when embedding, of course (not all the world is an .so
> :).

Hmm, in your Perl example, I wouldn't rule out you wanting to share the
event loop with some C-based library and being unable to do so.

All the world is an .so. ;)

There's definitely nothing you can't do with a void*, so this is all a
question of efficiency. I assert that the cost of a sizeof(void*) to
point to the relevant part of your structure (which can be
nearby...still reasonable cache locality) is not too high.
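
To spell out the void* route: every libev watcher already carries a
void *data member, so a sketch of the one-indirection alternative to
EV_COMMON (the struct and helper names are mine) looks like:

    #include <ev.h>

    struct my_conn {
        ev_io watcher;    /* the watcher can even be embedded ... */
        int   request_id; /* ... with per-connection state nearby */
    };

    static void conn_cb(struct ev_loop *loop, ev_io *w, int revents)
    {
        /* one sizeof(void*) hop back to the surrounding state */
        struct my_conn *conn = (struct my_conn *)w->data;
        (void)conn->request_id;
    }

    static void conn_watch(struct ev_loop *loop, struct my_conn *conn, int fd)
    {
        ev_io_init(&conn->watcher, conn_cb, fd, EV_READ);
        conn->watcher.data = conn; /* the void* in question */
        ev_io_start(loop, &conn->watcher);
    }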

> 
>> * on ev_once failure, you're calling the callback with EV_ERROR? Yuck.
>> That's quite surprising behavior, and I could see it leading to stack
>> overflows as each ev_once tries to issue another one.
> 
> All callbacks will be called with EV_ERROR when an error occurs. And yes,
> if you don't do error handling and endlessly retry the same operation in a
> loop, you run into problems.
> 
> But as that is an obvious programming bug, I don't see any problem here.

Hmm. Let me introduce a use case: an event-driven program which must not
fail. init or similar.

I worked on such a program recently. If it were unreliable, you would
have to send the system back to the factory for repair (i.e., flashing
new software). On ENOMEM, it would basically sleep and retry. This was
quite successful (memory could be temporarily consumed by network
buffers, etc., which cleared up after a while).

For this program, it's important to know more than that an error has
occurred. EV_ERROR is totally inadequate. You're using it for several
different cases. I spotted at least these three:

* malloc() failed in ev_once - transient runtime error.
* select() failed with ENOMEM, so libev chose to kill this file
  descriptor and now is notifying userspace.
* bad file descriptor - probably a logic error.

What is my program supposed to do? It can't distinguish them, and the
correct behavior in each of these conditions is totally different. Also,
in the program I'm thinking of, "libev chose to kill this file
descriptor" probably means a network link just went down. Ergh.

> Besides, if you cannot malloc the few bytes ev_once requires, you need a
> *lot* of good error handling code to continue sensibly.

Yes, as I've mentioned above, there are programs for which this level of
error handling is necessary.

>> * What's your use case for ev_loop_new() and ev_loop_default()'s bitmask
>> of allowed implementations?
> 
> libevent's unconditional use of getenv raised concerns with me and
> apparently some users on this list, too, so this is one way to disable
> it (EVMETHOD_ANY instead of EVMETHOD_ALL). Also, I am sure some apps
> want control over the allowed event loops, e.g. to rule out select
> because it is known not to work for them.

Ahh, I'd have to agree that getenv() seems sketchy. But with the
interface you've supplied, you can't simply blacklist select(): the
flags form a whitelist, so ruling out one method means enumerating every
method you will accept, and the application can never pick up a better
backend added in a future release.
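
In today's spelling (the EVMETHOD_* constants of this mail are now
called EVBACKEND_*), a sketch of the two options; note that neither
expresses "anything except select, including backends that do not exist
yet":

    #include <ev.h>

    int main(void)
    {
        /* whitelisting works, but freezes the set of backends the app
         * will ever use - a future, better backend is excluded along
         * with select */
        struct ev_loop *a = ev_default_loop(EVBACKEND_EPOLL | EVBACKEND_POLL);

        /* "blacklisting" select can only be approximated by masking it
         * out of the currently recommended set, which again fixes the
         * set now rather than saying "anything but select" */
        struct ev_loop *b =
            ev_loop_new(ev_recommended_backends() & ~EVBACKEND_SELECT);

        (void)a; (void)b;
        return 0;
    }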

> 
>> * (again, just skimming) you're closing fds automatically on ENOMEM?
>> Ergh.
> 
> There is little else to do. This isn't malloc or the like, but a kernel
> interface, and the failure is usually due to hard limits (not really out
> of memory).
> 
> Point being, libev(ent) cannot continue in this condition, there cannot
> be any progress. Closing the fd and signalling the relevant part of the
> application is in no way different than a network overload or problem
> resulting in the same condition.
> 
> If the app cannot handle that, deep shit.

There are applications which can handle you returning ENOMEM to them on
select(), but basically no applications which can reliably handle you
randomly closing their file descriptors. A couple of examples:

* any X11 application is screwed if you close its connection to the server
* any server is going to assume nothing will happen to its listen socket

> 
>> That seems rather undesirable for many applications.
> 
> Well, it's arguably better than libevent's behaviour, which is simply
> returning from event_loop, leaving the app unclear on what has happened
> and what to do.

libevent retains errno, I believe. (Not completely sure - the project I
described above used a version with a modified API. I'm not 100%
thrilled with the stock libevent API either.)

> In any case, you can get the same behaviour as libevent by calling unloop
> in case of an error, so the interface is strictly more powerful.

No, on error your library may have muddied the water already by screwing
with the file descriptors. libevent also makes errors clearer by simply
returning an error from the failed function (I'm thinking of event_once()
vs. ev_once() here).
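
A sketch of the libev side of that contrast (ev_once returns void, so
the call site cannot observe failure synchronously, unlike libevent's
event_once(), which returns -1 on failure):

    #include <ev.h>

    static void once_cb(int revents, void *arg)
    {
        if (revents & EV_ERROR) {
            /* e.g. the internal malloc failed - reported here,
             * detached from the call site and without an errno */
        }
    }

    int main(void)
    {
        struct ev_loop *loop = ev_default_loop(0);

        /* returns void: no synchronous error check is possible */
        ev_once(loop, /*fd=*/0, EV_READ, 5., once_cb, 0);

        ev_run(loop, 0);
        return 0;
    }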

> Thanks a lot for your questions, I hope I could clarify some things and
> design decisions. It's indeed not easy to get everything right, and I am
> sure the ev.h API can get improvements. In some cases there have been
> design trade-offs (no mutexes leading to less automatic management for
> example).
> 
> I do think the design is useful in practice, where error handling is rarely
> done to the utmost extent and sensible behaviour in dead-end situations
> counts a lot.

There are a lot of sucky programs out there, but I think it's important
to not penalize the ones where it counts the most. Keep in mind that
libevent is used in a number of important programs. (Many servers; on
Linux, the NFS daemons; etc.)

