I'm having a bit of confusion about the role of output queues when dealing
with input queue drops as explained by Cisco guidelines on the matter:

http://www.cisco.com/univercd/cc/td/doc/cisintwk/itg_v1/tr1915.htm#xtocid4

I understand that input drops occur when the router receives incoming
packets faster than it can process them.

It first suggests increasing the output queue size on the common destination
interfaces for traffic arriving on the interface that is experiencing input
queue drops.  I understand the reasoning behind this part of the solution,
although the wording confused me at first.

However, it then suggests decreasing the size of the input queue to force
input queue drops to become output queue drops.  What relationship is there
between the input and output queues that would cause an input queue decrease
to result in output queue drops?  Isn't the output queue just used for
handing off packets to an outgoing physical interface buffer?  I would think
that decreasing the input queue size would simply result in more input queue
drops.

I was thinking it may be referring to output queue drops on an interface
other than the one experiencing input queue drops (i.e., the destination
interface of the traffic), but I still can't visualize how an input queue
decrease on one interface would result in an increase in packet traffic to
an outgoing interface to the point that it would cause output drops.  It
seems to me it would result in an increase of input queue drops on the
incoming interface and less traffic on the outgoing interface (due to more
packets being dropped on the inbound side).

Can someone shed some light on this for me?  Or point me in the right
direction?

Many thanks.

--
Jason


Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=66955&t=66955
--------------------------------------------------
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]
