I've written a fairly extensive set of libraries and applications that are based on libevent.

Because of the way I've implemented the event-based solution for handling an internal message protocol we send over a socket, I find my application using 100% of the CPU dealing with data coming in over the various sockets.

Currently I've implemented this by setting an EV_READ event for each new file descriptor.  Because of the way I'm handling our protocol, when I get a read event on the fd I only read a portion of the data that may be available, and then reschedule the event (thinking I would minimize starvation of other events).
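Simplified, and with placeholder names, my current per-connection read handling looks roughly like this:

/* Simplified sketch of my current handler: a non-persistent EV_READ
 * event that reads one protocol-sized chunk, then re-adds itself so
 * other events get a chance to run.  Names are placeholders. */
#include <event.h>
#include <unistd.h>

#define CHUNK_SIZE 512   /* roughly one message's worth */

struct conn {
    struct event read_ev;
    int fd;
};

static void on_readable(int fd, short what, void *arg)
{
    struct conn *c = arg;
    char buf[CHUNK_SIZE];

    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0) {
        /* hand the partial message to the protocol layer ... */
    }

    /* Reschedule: the event was set without EV_PERSIST, so it has to
     * be re-added each time.  Since the socket usually still has
     * unread data, it reports readable again on the very next pass. */
    event_add(&c->read_ev, NULL);
}

void conn_start(struct conn *c, int fd)
{
    c->fd = fd;
    event_set(&c->read_ev, fd, EV_READ, on_readable, c);
    event_add(&c->read_ev, NULL);
}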

But since there is still data to read, the readable event fires again(?), and as a result, given the hundreds of connections and the 50 msg/sec data rate I have, the process chews through CPU dealing with the constant isReadable state.

Soooo, while I'm using bufferevent_write(), I never switched the read side over.  Now I think I have to...  I'm looking for guidance...

It seems that I will be doing away with my EV_READ events for new fds and switching to some mechanism based on the read callback for the bufferevent...
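Something like the sketch below is what I have in mind (libevent 1.x bufferevent calls; untested, and the names are my own placeholders):

/* Sketch of what I think the bufferevent version would look like.
 * The bufferevent does the actual socket reads; I just drain its
 * input buffer in the read callback. */
#include <event.h>

static void on_read(struct bufferevent *bev, void *arg)
{
    char buf[512];
    size_t n;

    /* pull out whatever has accumulated in the input buffer and
     * feed it to the protocol parser */
    while ((n = bufferevent_read(bev, buf, sizeof(buf))) > 0) {
        /* parse buf[0..n) ... */
    }
}

static void on_error(struct bufferevent *bev, short what, void *arg)
{
    bufferevent_free(bev);
}

struct bufferevent *conn_start(int fd, void *ctx)
{
    struct bufferevent *bev =
        bufferevent_new(fd, on_read, NULL, on_error, ctx);
    if (bev != NULL)
        bufferevent_enable(bev, EV_READ);
    return bev;
}

The idea being that I stop re-adding a raw EV_READ event by hand and let the bufferevent manage the socket reads, only draining its buffer in the callback.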

Am I heading in the right direction?  Is there something I'm missing??

Thanks for any insight that can be provided given the minimal details I've provided.

Morgan Jones

