I'm having a troubling problem, and I'm trying to decide whether I'm chasing a ghost or a real bug. I'm running libev 3.49 on OS X Leopard 10.5.7, and I use libev in both my server and my client. The server reads 1-2 KB and returns ~100 bytes within 2-3 ms. The client, which has already connected, set async I/O, and sent its bytes, uses libev with an I/O watcher for EV_READ on its socket plus a timeout callback; the I/O watcher runs at a higher priority. 99% of the time everything works just fine.
In the failure case, the server sends its response 2-3 ms later (according to its logs), but the client does not see it within 50 ms and my timeout callback fires. I've gone as far as setting libev's flags to use the poll backend and modifying libev to dump out what it is passing to poll() -- verifying that it is essentially calling poll with one entry, (<myfd>, POLLIN), and a 50 ms timeout. (The problem happens with select() too, but its parameters are a bit more difficult to log.) My sense is that I'm seeing some issue in OS X's loopback driver, but this is a very important piece of code, so if anybody has suggestions one way or the other, I'd be very appreciative. Cheers, Eric
_______________________________________________
libev mailing list
[email protected]
http://lists.schmorp.de/cgi-bin/mailman/listinfo/libev
