On Wed, 10 Nov 1999, Stewart Dean wrote:
> OK: I am seeing intermittent network saturation: internal pings fail,
> telnet session hang or get dropped, etc. I have no sniffer, no network
> analyzer, no network management software. This is an Ethernet
> network that was fiber linked to IBM/Cabletron/Synoptics hubs, but
> now has a Cisco 5500 with RSM at its center and about 1/3 of the
> network is Cisco 2900 XLs...in a year or two, it'll be all of it. It handles
> about 1000 students and 500 faculty and staff. We have a T1
> outbound out of a Cisco 2501 (which ties to the intranet with a 10Mb
> regular Ethernet); its other serial port is a fractional T1 from a
> satellite campus.
You should be able to look at the interface traffic on your Cisco gear.
You may want to do a search for MRTG and set up a *read-only* SNMP
community on that gear while you're troubleshooting, then you can have
graphs for the internal network.
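For example, on the IOS boxes that would look something like this (the
community string, ACL number, and MRTG host address are all made up, and
the exact syntax drifts between IOS and CatOS versions, so check against
your images):

```
! IOS side (2501, 2900XLs) -- "graphs-ro" is a placeholder community
snmp-server community graphs-ro RO 10
access-list 10 permit host 192.0.2.5    ! your MRTG box only

! CatOS on the 5500 supervisor
set snmp community read-only graphs-ro
```

Point MRTG at the interfaces carrying your uplinks and you'll have the
same kind of graphs internally that AppliedTheory gives you for the T1.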
> I notice that, when network saturation happens, the T1-Out is
> pegged....the ISP, AppliedTheory/Nysernet, provides a nice web-
> based page that graphs our T1 usage. When I do a 90 day report, I
> see the first 30 days is flat at 10-15%. Then (perhaps coinciding with
> the beginning of replacing old stuff with 2900XL Cisco gear) I see the
> beginning of peaking, that grows over time. By this time, we are
> getting 100% T1 out for periods of hours...then it will break off and
> go down to 20-30% and ordinary usage resumes.
>
> About the only approach I've been able to come up with is:
> = scanning the 5500's show port for excessive errors and pulling the
> fiber to the problematic port. That hasn't yielded anything.
> = pulling the fibers to all switches/hubs one at a time and watching
> the CPU% of the Internet router. I observed a 10% drop on one fiber
> leading to a student dorm, but no great restoral of services.
You should be able to get this information from the switch itself, or put
a port in monitor mode and run a sniffer on one of your Solaris boxes.
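On CatOS the monitor-mode setup is a SPAN session; something like the
following (the module/port numbers are made up -- source is your suspect
uplink, destination is wherever the Solaris sniffer box plugs in):

```
! Catalyst 5500: mirror both directions of port 3/1 to port 5/10
set span 3/1 5/10 both

! and when you're done looking:
set span disable
```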
> The floor is open. I appreciate your suggestions for:
> = debugging with what I've got
Snoop on Solaris is great for sniffing traffic. I wish it were available
for more OSes; it rocks.
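A quick sketch of how I'd use it here (interface name and host address
are placeholders -- adjust for your box):

```
# capture to a file off the interface facing the 2501 (hme0 assumed)
snoop -d hme0 -o /tmp/sat.cap

# then read the capture back, verbose, to see who's doing the talking
snoop -i /tmp/sat.cap -V | more

# or just watch live traffic to/from one suspect host
snoop -d hme0 host 192.0.2.17
```

Do the capture while the T1 is pegged and the top talkers should jump
right out at you.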
> = what hw/sw would work to help debugging
MRTG your traffic and look at the Cisco site or call Cisco for assistance
on the Catalyst. Unlike almost any other company these days, Cisco has
tech support reps with clue. Explain to them or your local Cisco office
that you've started seeing these problems during an implementation and
move to their gear, they'll most likely help you especially if your
equipment is on maintenance. There's more information on their site too,
but it's easier to start with a warm body and they should be able to help
you find things out pretty quickly or suggest a good course of action.
A bonus is that you can call any time of the day or night and get a
clueful person. Router/switch maintenance is cheap considering the
service levels, and you're paying it so grab the serial number off the
Catalyst and call them. They may also be able to help you plan some QoS
stuff that will keep the students within a bandwidth cage while you sort out a
long-term solution. (I hope none of my local Cisco salesdweebs read
that after my anti-QoS rant the other week)
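For the bandwidth cage, one possibility is generic traffic shaping on
the 2501's Ethernet; a rough sketch only (interface, ACL number, dorm
subnet, and rate are all invented, and Cisco can tell you whether your
2501's IOS image actually supports it):

```
! shape traffic matching ACL 101 (the dorms, hypothetically) to
! about half the T1
interface Ethernet0
 traffic-shape group 101 768000

access-list 101 permit ip 10.1.1.0 0.0.0.255 any
```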
Paul
-----------------------------------------------------------------------------
Paul D. Robertson "My statements in this message are personal opinions
[EMAIL PROTECTED] which may have no basis whatsoever in fact."
PSB#9280
-
[To unsubscribe, send mail to [EMAIL PROTECTED] with
"unsubscribe firewalls" in the body of the message.]