Hello,

I'm experiencing a problem with memory freeing in libevent (1.4.3-stable). I've implemented a proxy server using libevent's http functions. Everything works fine as long as the workload isn't heavy; under heavy load, memory usage grows rapidly. My memory leak detection tool (I use MemoryScape from
I have an event that gets created with EV_READ | EV_PERSIST, and I'm
considering different ways to implement read timeouts. Passing a timeout
value to event_add() is the obvious approach, but the docs don't make it
clear what behavior to expect...
The event in the ev argument must be already
I needed to do an apples-to-apples comparison between RT signals and epoll for a client, so I fixed rtsig.c from 1.3e (see an earlier post) to compile, then fixed it to work, and ported that to 1.4.3-stable.

NOTE WELL! Use this only for benchmarking: the rtsig_dealloc routine may not free everything it
I'm having trouble getting timeouts to work. According to the docs, libevent can assign timeout events to file descriptors, triggered whenever a certain amount of time has passed with no activity on the descriptor.
When I create a socket read event and pass a {12, 0} timeout to
On Mon, May 12, 2008 at 5:50 PM, Forest [EMAIL PROTECTED] wrote:
I'm calling event_set() with EV_PERSIST, and building against libevent
1.3b on linux. Are timeouts known to be broken with either of these?
The behavior of timeouts with EV_PERSIST is not well documented and somewhat counterintuitive.