Since we were discussing fast switching earlier, I thought I'd post something I found on CCO that was really interesting. At least I think it's interesting, and it would be good to understand this for troubleshooting purposes. Or troubleshooting porpoises. Either one. :-)

Packet Routing When Using Both Fast Switching and Process Switching

Question: I have four equal-cost parallel paths to the same destination. I am doing fast switching on two links and process switching on the other two. How will the packets be routed in this situation?

Answer: Assuming that there are four equal-cost paths to some set of IP networks, with interfaces one and two fast switching and three and four not, the router will:

1. Establish the four equal-cost paths in a list. Call them paths 1, 2, 3, and 4. When you do a "show ip route x.x.x.x", the four next hops to x.x.x.x will be displayed.

2. Start with a pointer, called the "interface_pointer", on interface 1. The "interface_pointer" cycles through the interfaces in an orderly fashion, such as 1-2-3-4-1-2-3-4-1, and so on. The output of "show ip route x.x.x.x" includes a "*" to the left of the next hop that the "interface_pointer" will use for a destination address not found in the cache. Each time the "interface_pointer" is used, it advances to the next interface.

3. To illustrate this, repeat the following loop (there's a small sketch of it after this list):

   - A packet comes in, destined for a network serviced by the four parallel paths.
   - Check whether the destination is in the cache. (The cache starts off empty.)
   - If it is in the cache, send the packet out the interface stored in the cache.
   - Otherwise, send it out the interface the "interface_pointer" currently points to, then move the "interface_pointer" to the next interface in the list.
   - If the interface we just sent the packet over is running route-cache, populate the cache with that interface ID and the destination IP address.

Over time, the interfaces running route-cache will carry all of the traffic except for destinations not yet in the cache. With two route-cache and two non-route-cache interfaces, there is a fifty percent chance that an uncached destination will hit an interface that caches entries, thereby nailing that destination to that interface. If no interface is running route-cache, the traffic will round-robin on a packet-by-packet basis.

The upshot: either enable route-cache on all of the interfaces in the parallel paths or on none of them; otherwise, expect the interfaces with caching enabled to carry all of the traffic over time.
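
Here's a small Python sketch of the behavior described above. It's purely illustrative, not actual IOS code; the class name, interface names, and the forward() helper are all made up for the example, but the logic follows the loop in the answer: cached destinations stick to their cached interface, uncached destinations round-robin via the pointer, and only the caching (fast-switched) interfaces ever add cache entries.

    # Illustrative sketch (not IOS internals) of the interface_pointer
    # plus route-cache behavior described in the post.

    class ParallelPaths:
        def __init__(self, interfaces):
            # interfaces: list of (name, route_cache_enabled) for the equal-cost paths
            self.interfaces = interfaces
            self.pointer = 0     # the "interface_pointer", starts on interface 1
            self.cache = {}      # destination IP -> interface name (the route cache)

        def forward(self, dest_ip):
            # If the destination is already cached, always use the cached interface.
            if dest_ip in self.cache:
                return self.cache[dest_ip]

            # Otherwise send it where the pointer is, then advance the pointer.
            name, caches = self.interfaces[self.pointer]
            self.pointer = (self.pointer + 1) % len(self.interfaces)

            # Only interfaces running route-cache store the entry, nailing
            # that destination to this interface from now on.
            if caches:
                self.cache[dest_ip] = name
            return name


    # Two fast-switched (caching) links and two process-switched links:
    paths = ParallelPaths([("Serial1", True), ("Serial2", True),
                           ("Serial3", False), ("Serial4", False)])

    for pkt in ["10.1.1.1", "10.1.1.2", "10.1.1.1", "10.1.1.3", "10.1.1.2"]:
        print(pkt, "->", paths.forward(pkt))

Running it, the first packet to 10.1.1.1 goes out Serial1 and gets cached there, 10.1.1.2 gets nailed to Serial2, and repeat traffic to those destinations never touches Serial3 or Serial4; only brand-new destinations ever reach the non-caching links, which matches the fifty-percent observation above.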

