On Thu, Jun 26, 2008 at 12:45 AM, Bill Bogstad <[EMAIL PROTECTED]> wrote:
> Sure, but any process that is waiting for a file to appear in the
> filesystem seems more like a batch process to me. There is no way to
> know how long it will take (and thus your timeouts).
Bill, everything you say makes sense, except that perhaps a key concept
is missing. Most of our scripts and processes are in Python. If I have
20 such scripts idling in memory, waiting for something to happen (a
dbus event?), the footprint is huge - clearly this does not scale, even
if they get paged out. Incron, OTOH, is a single process, weighing
600K, and can have a config file listing 20 scripts that might be
triggered by an (inotify) event. Make that 200 scripts - the memory
footprint doesn't change.

> You don't seem to like DBUS

Why do you repeat that? I have no problem with DBus where it makes
sense: processes that are guaranteed to be running in memory all the
time.

The key problem with paging out is that if you have a dozen idle
processes in memory that are network daemons, and a dozen that are
batch processes waiting for a dbus signal, the kernel will page them
out indiscriminately. The batch ones will be able to do their job OK
when called; the network ones will time out on their users. The kernel
has no way of knowing the difference.

This isn't theoretical stuff - these are very practical concerns of
offering network services in real life. Memory usage matters. OTOH, I'm
open to seeing facts that contradict my analysis. If you or anyone has
a strategy that is better than incron - say, a way to keep 20 to 200
Python scripts in memory in 600K of (safely swappable) memory - I'd
love to see it.

cheers,


m
--
 [EMAIL PROTECTED]
 -- School Server Architect
 - ask interesting questions
 - don't get distracted with shiny stuff - working code first
 - http://wiki.laptop.org/go/User:Martinlanghoff
_______________________________________________
Server-devel mailing list
Server-devel@lists.laptop.org
http://lists.laptop.org/listinfo/server-devel
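
P.S. for concreteness, here's a sketch of what such an incrontab could
look like. The paths and script names below are made up, but the field
layout is incron's: watched path, inotify mask, command, where $@
expands to the watched path and $# to the name of the file that
triggered the event.

    /var/spool/uploads   IN_CLOSE_WRITE  /usr/local/bin/process-upload.py $@/$#
    /etc/resolv.conf     IN_MODIFY       /usr/local/bin/dns-changed.py

    # incrond stays resident; the 20 (or 200) handler scripts cost
    # no memory at all until their event actually fires.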