A number of months ago I was troubleshooting an issue for a customer with a very large pipe (200x200 Mbps) who was having a lot of problems related to the rate-limiting queue we had put on their router. Individual users were not seeing the full speed of their PCQ, and the overall connection would hit nearly 200 Mbps download with the queue off but couldn't get above 120 Mbps with the queue on. For a while I thought it was something weird with the PCQ, until I started looking at the queue-type values and noticed tons and tons of packets piling up in the 'dropped' counter for that queue; the queue type being set to default or default-small appeared to be the cause. Some research led me to create a 'big-pipe' FIFO queue type with a 50,000-packet buffer limit, and that queue has been running great since then.
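
For reference, the change was essentially this (the simple-queue name below is made up for illustration; the queue type and 50,000-packet limit are what I'm actually running):

    # create a plain FIFO queue type with a much deeper packet buffer
    /queue type add name=big-pipe kind=pfifo pfifo-limit=50000

    # point the customer's simple queue at it for both directions
    /queue simple set [find name="cust-200x200"] queue=big-pipe/big-pipe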

The more customers we deploy with queues on MikroTik devices, the more I'm finding that these default queue types are the issue, and it doesn't seem to be unique to 'big pipes' at all. The default-small and default queue types have a ridiculously small packet buffer (10 and 50 packets respectively). When the queue stats say 'dropped', does that mean the packet is indeed dropped entirely and needs to be retransmitted?

My 50,000-packet buffer limit doesn't seem to be stressing the CPU on that one board. I'm wondering whether my solution is viable for deploying a bit more en masse (that is to say, perhaps a 20x20 or a 10x10 doesn't need a 50,000-packet buffer, but it surely needs more than the default 50 or 10), as I'm starting to think it would take care of a lot of queue-related issues. Is there a better solution? Is our approach to queues perhaps flawed to begin with, or am I on the right track?
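
To make the scaling idea concrete, I'm picturing tiered queue types along these lines (these particular limits are untested guesses, just scaled down from the one that worked):

    /queue type add name=fifo-10x10 kind=pfifo pfifo-limit=2500
    /queue type add name=fifo-20x20 kind=pfifo pfifo-limit=5000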

Thanks!

Regards,

-- Samuel Kirsch, Network Support
Plexicomm - Internet Solutions | www.plexicomm.net
Office: 1.866.759.4678 x109 | Fax: 1.866.852.4688
Emergency Support: 1.866.759.9713 | [email protected]
