On Mon, Sep 21, 2009 at 2:37 AM, q6Yr7e0o nIJDVMjC
<u9oqc...@googlemail.com> wrote:
> Hi,
>
>
> What's the best way to limit the download bandwidth of a
> libevent-based application? The situation is that I have an incoming
> evbuffer from one client, and the application forwards this stream
> to an outgoing evbuffer on another client.
>
> If the incoming client has huge bandwidth and the outgoing one has
> only very limited bandwidth, my application wastes gigabytes of
> memory on buffering. Is it OK to just not read from the incoming
> evbuffer? Will this drop packets from the incoming stream and tell
> the incoming client to send again (the connection is TCP)?

Hello,

Yes, you can just delay the read and no packets will be dropped: once
you stop reading, the kernel's receive buffer fills up, TCP flow
control shrinks the advertised window, and the sender is throttled
automatically. The bufferevents take care of all the raw TCP
buffering.
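
The usual way to wire that up is with watermarks: when the slow
side's output evbuffer grows past some cap, disable reading on the
fast side and set a write watermark on the slow side so you get a
callback once it has drained, then re-enable the read. Here is a
rough sketch in the spirit of libevent's le-proxy sample; the
MAX_OUTPUT cap and the way the two bufferevents are paired through
the callback argument are just assumptions for illustration:

#include <event2/event.h>
#include <event2/bufferevent.h>
#include <event2/buffer.h>

#define MAX_OUTPUT (512 * 1024)  /* assumed cap on buffered bytes */

static void drained_writecb(struct bufferevent *bev, void *ctx);

static void eventcb(struct bufferevent *bev, short what, void *ctx)
{
    if (what & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
        bufferevent_free(bev); /* real code would tear down the partner too */
}

/* Forward everything from 'bev' to its partner; pause reading when
 * the partner's output buffer gets too full. */
static void readcb(struct bufferevent *bev, void *ctx)
{
    struct bufferevent *partner = ctx;
    struct evbuffer *src = bufferevent_get_input(bev);
    struct evbuffer *dst = bufferevent_get_output(partner);

    evbuffer_add_buffer(dst, src);  /* move all pending data */

    if (evbuffer_get_length(dst) >= MAX_OUTPUT) {
        /* Too much queued: stop reading from the fast side and ask to
         * be woken once the slow side drains below MAX_OUTPUT/2. */
        bufferevent_setcb(partner, readcb, drained_writecb, eventcb, bev);
        bufferevent_setwatermark(partner, EV_WRITE,
                                 MAX_OUTPUT / 2, MAX_OUTPUT);
        bufferevent_disable(bev, EV_READ);
    }
}

/* Called when the slow side's output buffer drops below the low
 * watermark: resume reading from the fast side. */
static void drained_writecb(struct bufferevent *bev, void *ctx)
{
    struct bufferevent *partner = ctx;
    bufferevent_setcb(bev, readcb, NULL, eventcb, partner);
    bufferevent_setwatermark(bev, EV_WRITE, 0, 0);
    bufferevent_enable(partner, EV_READ);
}

The nice thing about this scheme is that the kernel's TCP flow
control does the actual throttling; the watermarks only bound how
much memory you buffer in userspace.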

I was also wondering about this the other day. We have dial-up users
connected to 10 Gbit backend servers. Buffers filling up is not
really my main issue, but I somehow need to rate-limit (read:
traffic-shape) my users. With a fork/select-based daemon that serves
one user per process, rate limiting is just a matter of adding a
sleep() here and there. With an event-based daemon, I think I have to
delay my reads with a timer event before adding the data to a client
evbuffer. I don't know whether adding a timer event for each read (or
write) would cause much of a performance impact, though.
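
For what it's worth, you shouldn't need a timer per read. A
per-client token bucket driven by one persistent timer keeps the
event count proportional to the number of clients: each tick refills
a byte allowance and re-enables reading, and the read callback pauses
the bufferevent when the allowance runs out. A rough sketch, assuming
libevent 2.x; the struct client layout, the 100 ms tick, and the
two-bufferevent pairing are made up for illustration:

#include <event2/event.h>
#include <event2/bufferevent.h>
#include <event2/buffer.h>

/* Hypothetical per-client shaper state. */
struct client {
    struct bufferevent *bev;      /* the fast incoming side */
    struct bufferevent *partner;  /* the slow outgoing side */
    size_t tokens;                /* bytes we may still forward this tick */
    size_t rate;                  /* bytes allowed per tick */
};

/* Fires once per tick (EV_PERSIST): refill the bucket and resume
 * reads. Real code would also flush any input already buffered. */
static void tick_cb(evutil_socket_t fd, short what, void *arg)
{
    struct client *c = arg;
    c->tokens = c->rate;
    bufferevent_enable(c->bev, EV_READ);
}

/* Move at most 'tokens' bytes per tick; pause when they run out. */
static void shaped_readcb(struct bufferevent *bev, void *arg)
{
    struct client *c = arg;
    struct evbuffer *src = bufferevent_get_input(bev);
    struct evbuffer *dst = bufferevent_get_output(c->partner);
    size_t n = evbuffer_get_length(src);

    if (n > c->tokens)
        n = c->tokens;
    evbuffer_remove_buffer(src, dst, n);
    c->tokens -= n;

    if (c->tokens == 0)
        bufferevent_disable(bev, EV_READ);  /* wait for the next tick */
}

/* Setup (error handling omitted): one persistent timer per client. */
static void setup_shaper(struct event_base *base, struct client *c)
{
    struct timeval tick = { 0, 100 * 1000 };  /* assumed 100 ms tick */
    struct event *ev = event_new(base, -1, EV_PERSIST, tick_cb, c);
    event_add(ev, &tick);
    bufferevent_setcb(c->bev, shaped_readcb, NULL, NULL, c);
    bufferevent_enable(c->bev, EV_READ);
}

With EV_PERSIST the timer re-arms itself, so there is exactly one
timer event per client no matter how many reads happen; you could go
further and share a single tick across all clients.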

Since I was just porting my fork/select-based app to libevent, I
would like to hear rate-limiting and traffic-shaping strategies from
other people. Any pointers to general traffic-shaping theory are
appreciated too :)

Cheers,
Tommy