> > The next time the scheduler runs, it notes the packet in the input queue of
> > ip_input (assuming it was an IP packet), and schedules this process to run.
> >
> > With fast switching, the CPU is interrupted, and the packet is actually
> > switched at that time.
>
> Yup, that's how I understood it as well. The CPU must be interrupted in
> all cases, because otherwise, how could it know a packet had arrived?
> Unless you're doing distributed switching of some kind, of course; in
> that case, the receive interrupt needn't be seen by the main CPU at all.
>
> > However, I wasn't able to glean an answer to the original question about
> > the second part of the statistic when you do a show interface. Do you
> > think the second part (the interrupt part) is just referring to the
> > second situation (switching the packet during the CPU interrupt)?
The generalization established in "that book" is: the more primitive the
switching method (namely process switching; I'm not sure the generalization
scales all that well), the more interrupts it uses. The same book claims
that the second value includes all interrupts handled by the CPU during the
five-second interval in question. Presumably, a router passing a non-trivial
amount of traffic would show a noticeably different ratio of interrupt-level
CPU time to total CPU utilization under process switching than it would with
the higher-end packet forwarding mechanisms enabled (a rough worked example
follows after the quoted text below).

>
> That's how I always understood it anyway.
>
> I'll take a peek at 'Inside Cisco IOS Software Architecture' when I'm at
> work.
>
> Regards,
>
> Marco.
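
To put a rough number on that ratio argument, here is a minimal sketch
(Python, not from this thread) that splits a five-second CPU figure of the
form "total%/interrupt%" into interrupt-level and process-level shares. The
sample line below is invented for illustration; only its general shape
follows the usual "show processes cpu" output, where the value after the
slash is the portion of CPU time spent at interrupt level.

import re

# Illustrative sample in the style of "show processes cpu" output; the
# numbers are made up for this example.
sample = "CPU utilization for five seconds: 12%/7%; one minute: 10%; five minutes: 9%"

match = re.search(r"five seconds:\s*(\d+)%/(\d+)%", sample)
if match:
    total, interrupt = (int(g) for g in match.groups())
    # Whatever is not spent at interrupt level is (roughly) process-level
    # work, e.g. packets queued for the ip_input process.
    process_level = total - interrupt
    print(f"total CPU (5 sec):        {total}%")
    print(f"interrupt level:          {interrupt}%")
    print(f"process level (approx):   {process_level}%")
    if total:
        # As argued above, this ratio should look noticeably different
        # under process switching than under the faster forwarding paths.
        print(f"interrupt share of total: {interrupt / total:.0%}")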

