On Thu, 22 Nov 2001, Loren Wagner wrote:

> I'm curious... when those on the list bring up timely application of fixes
> what is considered timely?  And, what is the balance based on the number of
> devices one might have?

It depends on your tolerance for risk.  Generally, patching once a quarter
will keep you out of trouble on most of the issues.  

It's somewhat like the doctor saying that "The chance of someone having
this is 2% overall, but in your case it's 100%."  Very few sites get
attacked with most things prior to a patch being available, but it does
happen.  Probably for the bulk of things, there's at least a 90-120 day
window until things get to be script kiddie or worm material.  Depending
on redundancy and test cycles, it can take 2-4 weeks to test fixes.  Some
worms this year have been in the ~45 day window, but most have exploited
supercritical things like nameservers, where the patch time should be
accelerated.

30 days is probably as good as it gets, but if you assume that you're in a
typical company that turns on all the management garbage, adds IP
addresses to your switches, etc. then you're going to pay heavily for
doing maintenance on such devices to keep them up to date.  If you turn
off all the CDP/Spanning tree stuff, don't load IP, etc. then switches are
appliance-like and only need bugfixes.

Probably the best thing to do is to schedule maintenance based on
criticality- critical things should fall into the "immediate" category if
you've done a fair job of architecting things, some set should fall
into the 30 day window, and most things should be in the 60 or 90 day
cycle.  If nobody's looked at a machine/router/whatever in 6 months,
you're probably going to have issues.  6 months is probably as long as you
want to go, even if you haven't done a specific assessment for a machine.
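The tiered cycles above can be sketched as a simple overdue check.  The
tier names and day counts here are illustrative assumptions following the
immediate/30/60/90-day buckets and the 6-month ceiling described above,
not a fixed standard:

```python
from datetime import date, timedelta

# Illustrative patch windows per criticality tier (assumed values,
# matching the immediate/30/60/90-day cycles described above).
PATCH_WINDOWS = {
    "critical": timedelta(days=0),   # patch immediately
    "high":     timedelta(days=30),
    "medium":   timedelta(days=60),
    "low":      timedelta(days=90),
}

# Hard ceiling: nothing should go unreviewed longer than ~6 months.
MAX_WINDOW = timedelta(days=180)

def is_overdue(criticality, last_patched, today=None):
    """Return True if a device has exceeded its patch window.

    Unknown tiers fall back to the 6-month ceiling, which also caps
    every tier so no device is ever scheduled past it.
    """
    today = today or date.today()
    window = min(PATCH_WINDOWS.get(criticality, MAX_WINDOW), MAX_WINDOW)
    return today - last_patched > window
```

Running this against an inventory once a week is enough to flag the
"nobody's looked at this box in 6 months" cases before they become issues.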

Increased maintenance will also cause downtime issues, so you should
definitely engineer for redundancy and resiliency in either case.  Keeping
up on desktops is never fun- that's where it's good to get up to date
with scripting, or provide some protective measures that don't need as
much effort (if a personal firewall and AV cycle can be longer than a
desktop OS cycle, then they pay for themselves.)

Generic guidelines should be able to be superseded by component-specific
risk assessments.  For instance, updating exposed services on a machine
may supersede the need to update the machine- so if there aren't any
issues with say Apache, SSH and the OS, and that's all that's running on a
Web server, you should be able to skip a cycle (assuming no unpriv.
users, coders, etc.)  That's why taking the time to build strong systems
instead of trying to patch and update your way into them should pay off in
the long run (assuming that doesn't kill time-to-market.)  I can do "on
issue" maintenance of machines that I'm confident are configured
conservatively and are running mostly trustworthy software (see- in this
view immediate maintenance may come less often than 30 day maintenance.)

I never balance based on number, only on criticality.  If there's too much
stuff to manage, then that part of the business needs to be expanded, or
the amount of stuff needs to be shrunk, or you need to look again at
device management processes.

Some things are counterintuitive though- like turning on SNMP and adding a
"management suite" may increase the number of devices which need to be
maintained on a strict cycle because it makes things like switches layer 3
addressable, making management harder, not easier.

Just my 2 cents,

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
[EMAIL PROTECTED]      which may have no basis whatsoever in fact."

_______________________________________________
Firewalls mailing list
[EMAIL PROTECTED]
http://lists.gnac.net/mailman/listinfo/firewalls