Weekly is far too frequent for most environments. Unless a very particular threat has been exposed, this is overkill, and even getting quarterly patching schedules approved can be difficult. Rebooting weekly is out of the question on many of our servers. In some environments, rebooting more than once per year takes very-senior-management approval!
Also, if you require proper testing and burn-in of patches in your non-production environment first (which is a very good idea if you want to keep your systems stable), then you will spend all of your time testing and installing patches, and always be several weeks behind anyway - testing patches includes letting them run for at least a week, and preferably a month, before confirming their efficacy and impact.

Even our 'exposed' machines are patched quarterly or less often - they are also configured to minimise external threats, such that the security-bug-of-the-week is less likely to be exploitable on these machines in the first place. Good security policies help minimise the risk of OS security bugs, but occasionally a patch does have to be rushed through to combat some known-deadly vulnerability.

- Richard

Robert Brockway wrote:
> Hi Bryce.
>
> On Wed, 13 Jan 2010, Bryce T. Pier wrote:
>
>> For years my employer has only patched *nix systems on an annual basis.
>> We've now been directed to apply security patches quarterly. Due to the
>
> Quarterly?!?! If the systems are potentially exposed to threats that is
> far too infrequent. Show me a system that isn't potentially exposed to
> any threats (it would need to be disconnected from the network for a
> start).
>
> In theory this is all about cost/benefit analysis. In practice you need
> to patch regularly.
>
> One decent option is: regular patching done weekly, with urgent patches
> applied as required. Keep an eye on security lists.
>
>> infrequency of patching in the past, there has developed a fairly high
>> level of paranoia around patching "breaking" things, particularly
>
> More regular patching reduces the chance of breakage as you have broken
> the problem down into manageable blocks.
>
>> related to servers not coming back from the post-patch reboot.
>> To mitigate these fears I've been asked to document procedures for
>> backing out the applied patches and/or recovering the server in the
>> event of one not coming back up.
>>
>> Given that tools like RHN Satellite or Novell Zenworks don't have the
>> ability to do extensive pre-patch preparations like breaking hardware
>> root mirrors or running filesystem dumps, I have the impression that at
>> least in enterprise Linuxes there aren't frequently issues caused by
>> normal, regular patching activities.
>>
>> So I'm curious what other people are doing on the Linux platform.
>
> I've long had a preference for Debian. Its practice of backporting
> patches set the standard. In the last 10 years I've hardly had an
> incident where a Debian package update caused any problems at all. A lot
> of other distros have adopted these strategies too, but for platform
> stability I still prefer Debian.
>
> If you have boxes in prod you should hopefully have an equivalent
> environment set up for qa/dev (or at least as close as you can manage it).
> Apply patches to dev, then qa, and only then to prod if it passes all
> tests.
>
> When it is time to go to prod you can apply the patch to a subset of the
> servers, rolling the patches out in phases. E.g., if you have 12 servers
> and can run properly with a minimum of 10 servers, then you could apply
> the patches 1 or 2 servers at a time. Thus even if there is a failure
> you are still fully operational (unless a 2nd problem occurs).
>
> Using a COW or log-structured filesystem you could just roll back the
> filesystem. This isn't feasible on Linux yet but soon will be. Other
> Unixen have this operational now.
>
> Run a monitoring service. As well as general service failures, it will
> alert you if one of your updates went south and you hadn't otherwise
> noticed.
>
>> Do you use root disk mirrors and break the mirror prior to patching?
>
> I prefer hardware raid. The problem just 'goes away'.
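Rob's phased-rollout arithmetic (12 servers, minimum 10 in service, so patch 1-2 at a time) can be sketched as a small batch loop. Everything here is illustrative: the host names, the `BATCH` size, and the patch command are placeholders, not any real tool's interface.

```shell
#!/bin/sh
# Phased rollout sketch: patch servers in small batches so the pool
# never drops below its service minimum. Hosts and the patch command
# are hypothetical placeholders.
SERVERS="web01 web02 web03 web04 web05 web06 web07 web08 web09 web10 web11 web12"
BATCH=2        # 12 servers, minimum 10 in service -> at most 2 down at once
patched=""

set -- $SERVERS
while [ $# -gt 0 ]; do
    i=0
    while [ $# -gt 0 ] && [ "$i" -lt "$BATCH" ]; do
        echo "patching $1"   # e.g. ssh "$1" 'yum -y update && shutdown -r now'
        patched="$patched $1"
        shift
        i=$((i + 1))
    done
    # Here: wait for the batch to come back up and pass health checks
    # before moving on to the next batch.
done
```

The point of the loop structure is that a bad patch can only ever take out one batch's worth of capacity before the health check (not shown) halts the rollout.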
> H/w raid is cheap
> these days.
>
> When using s/w raid I do not break mirrors before updating.
>
>> Do you utilize filesystem dumps (dumpe2fs, etc.) or rely on enterprise
>> backups of the OS filesystems?
>
> It depends on the volume of the data. I think you really need to go for
> an enterprise backup solution once the amount of data gets very large
> (10s of TB). OTOH I think a lot of enterprise backup solutions are
> overly complicated. I wonder if the people who designed them have ever
> actually done a DR in real life. I have, and the answer is to keep it
> simple.
>
> Here are the notes for a talk on backups I've done a few times:
>
> http://www.timetraveller.org/talks/backup_talk.pdf
>
> I'm moving all that info over to http://practicalsysadmin.com.
>
>> Do you use rpm rollbacks?
>> Rebuild / re-image the server if there are problems?
>
> I'd roll back if possible and rebuild/re-image if I had to. In general.
>
> Rob

_______________________________________________
Discuss mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
http://lopsa.org/
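For the rpm-rollback question, one hedged sketch: newer versions of yum keep a transaction history that can be undone, which is often simpler than the old `--repackage`-based rpm rollback. This is a command transcript, not a script; the transaction ID shown is purely illustrative, and you should verify your yum version supports the `history` subcommand before relying on it.

```shell
# Before patching, note the most recent transaction ID:
yum history list | head -n 5

# Apply the quarterly patch set:
yum -y update

# If the box misbehaves after the post-patch reboot, undo that
# update transaction (42 is a placeholder ID from 'history list'):
yum history undo 42
```

Undo reinstalls the previous package versions from your repositories, so it only works if the older packages are still available there - another argument for keeping a local mirror of what you actually deployed.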
