I don’t have any 200x200 customers, but the 10-packet default size is 
definitely too small for the typical customer in the 5 to 20 Mbps range. I 
think 50 is a popular value.

50K packets sounds excessive even for 200M service.  I don’t think the stress 
on the router would be CPU so much as memory.  But the bigger negatives, I 
think, would be bufferbloat and possibly an excessively long burst before rate 
limiting kicks in.  Perhaps something in the 500-1000 packet range would be a 
good middle ground?
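
If you want to experiment with that, something like this should do it on 
RouterOS (the "mid-pipe" name, the 1000-packet figure, and the "customer-x" 
queue name are all just placeholders, not tested values):

  # FIFO queue type sized between default-small (10 packets) and the 50K big-pipe
  /queue type add name=mid-pipe kind=pfifo pfifo-limit=1000

  # point a customer's simple queue at it for both directions
  /queue simple set [find name="customer-x"] queue=mid-pipe/mid-pipe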

Yes, I know it’s the political season and everyone is flirting with Ted Cruz or 
Bernie Sanders, but maybe settle on more of a John Kasich solution?


From: Sam Kirsch 
Sent: Friday, January 29, 2016 10:23 AM
To: [email protected] 
Subject: [AFMUG] Queue Types on Mikrotik

A number of months ago I was troubleshooting an issue for a customer with a 
very large pipe (200x200) who was having a lot of problems related to the 
rate-limiting queue we had put on their router.  Individuals were not seeing 
the full speed of their PCQ, and the overall connection would hit nearly 200 
download with the queue off but couldn't get above 120 with the queue on.  For 
a while I thought it was something weird with the PCQ, until I started looking 
at the Queue Type values and suspected that the Queue-Type being set to the 
default or default-small setting was causing the issue.  I noticed tons and 
tons of packets getting put into the 'dropped' counter for that queue.  Based 
on some research, I ended up creating a 'big-pipe' FIFO queue type with a 
50,000 packet buffer limit.  That queue has been running great since then.
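
For reference, the queue type is roughly this (typed from memory, so the exact 
parameter names may be slightly off; 'big-pipe' is just the name I gave it):

  # FIFO queue type with a 50,000 packet buffer, used instead of default/default-small
  /queue type add name=big-pipe kind=pfifo pfifo-limit=50000

The rate-limiting queue on their router then points at big-pipe instead of the 
default queue type.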

The more customers we deploy with queues on Mikrotik devices, the more I'm 
finding that these default Queue-Types are the issue.  It doesn't seem to be 
unique to 'big pipes' at all.  Those default-small and default queue types have 
a ridiculously small packet buffer.  When the queue stats say 'dropped', does 
that mean the packet is indeed dropped entirely and needs to be retransmitted?
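
For anyone who wants to poke at their own boxes, this is roughly where I've 
been looking (commands from memory):

  # list the queue types (default-small and default are the tiny ones)
  /queue type print

  # watch the per-queue dropped counters
  /queue simple print stats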

It doesn't seem like my 50,000 packet buffer limit is stressing out the CPU on 
that one board.  I'm wondering if my solution is a viable one for being 
deployed a bit more en masse (that is to say, perhaps a 20x20 or a 10x10 
doesn't need a 50,000 packet buffer, but more than 50 or 10), as I'm starting 
to think it will take care of a lot of queue-related issues.  Is there a better 
solution?  Is our approach to queues perhaps flawed to begin with, or am I on 
the right track?

Thanks!

Regards,

-- Samuel Kirsch, Network Support
Plexicomm - Internet Solutions | www.plexicomm.net
Office: 1.866.759.4678 x109 | Fax: 1.866.852.4688
Emergency Support: 1.866.759.9713 | [email protected]
