> I have achieved restricting both inbound and outbound traffic using imq0.
> I marked the packets on the different interfaces, sending them to the
> rules I want, and then used **FORWARD** to imq. It works pretty well,
> though only done on a test bed of 4 IPs. I want to scale it to our
> production Linux box handling about 250 IPs on a 1.5 Mbps link.
> The question I have started thinking about, after Tobias Geiger's mail,
> is: what will the CPU overhead be when the Linux box also runs squid?
> Will htb3 & imq show the same results as in the test?
> I hope and feel it will. :)
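As I understand the setup described above, it is roughly this (a hedged sketch only — it assumes the IMQ kernel patch and the iptables IMQ target are installed, and the device name imq0 and the eth0 interface are my assumptions, not taken from the original mail):

```shell
# Bring up the intermediate queueing device (requires the IMQ patch)
ip link set imq0 up

# Redirect incoming traffic on eth0 through imq0 so ingress can be
# shaped with an egress qdisc; marking rules would go alongside this
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0

# Attach an HTB qdisc to imq0; classes and filters are added below it
tc qdisc add dev imq0 root handle 1: htb default 20
```

Whether the FORWARD chain or PREROUTING is the right hook depends on which traffic (routed vs. locally destined) should pass through the device.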
I think the CPU is not so important. There are other problems with shaping
incoming bandwidth with imq. First of all, you create an extra queue and so
extra delays. But using a "shared" structure to manage incoming traffic can
also be a problem. Imagine a setup where 100 kbps is split into 2 parts of
50 kbps each, and they can borrow from each other. One class is empty and
the other is full. When there is suddenly a burst in the empty class, it
will take some time before the traffic in the full class throttles down to
50 kbps. And don't forget the extra delay introduced by the imq device, so
the response will be even slower. It's better to make sure the 50 kbps is
always available for the bursty traffic. Of course, you waste some
bandwidth, but a few kbps is enough to make telnet more responsive.
So you can do some shaping on incoming traffic, but bursty traffic is not
so easy to manage.
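The 100 kbps example could look like this in HTB (a sketch under my own assumptions — the imq0 device, class ids, and the 90 kbit ceiling are illustrative, not from the original mail):

```shell
# Root HTB qdisc on the imq device, unclassified traffic goes to 1:20
tc qdisc add dev imq0 root handle 1: htb default 20
tc class add dev imq0 parent 1:  classid 1:1  htb rate 100kbit

# Borrowing version: each class is guaranteed 50 kbit but may borrow
# up to the full link, which gives the slow-throttling problem above
tc class add dev imq0 parent 1:1 classid 1:10 htb rate 50kbit ceil 100kbit
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 50kbit ceil 100kbit

# Conservative alternative: cap the bulk class's ceil below the link
# rate so a few kbit/s always stays free for interactive bursts
# tc class change dev imq0 parent 1:1 classid 1:20 htb rate 50kbit ceil 90kbit
```

The trade-off is exactly the one described: the capped ceil wastes a little bandwidth when the link is otherwise idle, but the bursty class never has to wait for the full class to throttle down.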
To be honest, I have only just started reading and thinking about shaping
incoming traffic, so any suggestions are welcome.
Stef
--
[EMAIL PROTECTED]
"Using Linux as bandwidth manager"
http://www.docum.org/
#lartc @ irc.oftc.net
_______________________________________________
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/