On Thu, May 15, 2014 at 12:57 AM, Carolyn Rowland <[email protected]> wrote:

> I would be interested in perspectives on this document if anyone takes the
> time to read the draft.
>

It is a large document, so I'm really just skimming it.  I have 2 comments:

*2.3.4 Security Risk Management*

This discusses ways to deal with risk: Avoid, Accept, Mitigate, Transfer.

It has a very old-fashioned view of risk.  That said, it would be good if
it could work in the fact that avoiding a risk increases risk over time,
because we get out of practice at handling the situation.  For example, I
know a place where an outage happened during an upgrade.  The developers
said, "OK, we'll upgrade as rarely as possible!"  The wise management
instead said, "No, we're going to require weekly releases until you can do
it without fail."  2-3 months later the upgrade process was mostly
automated and everyone on the team had experience doing it.  The risk of
future outages was removed.

I realize that 2.3.4 is describing a different kind of risk, but I'm sure
some kind of analog can be found.


*Chapter 3: Lifecycle*

This section is very thorough on the topics it covers, but it skips the
most important part of a system's lifecycle: upgrades.

The Heartbleed event was a stark reminder that one of the most important
parts of security is the ability to confidently upgrade a system rapidly
and frequently.  When Heartbleed was announced, most people were faced with
the following fears:  Fear that an upgrade would break something.  Fear
that it wasn't well understood how to upgrade something.  Fear that the
vendor didn't have an upgrade for something.  How many people were told,
"Gosh, the last time we upgraded (that system) it didn't go well, so we've
been avoiding it ever since!"?

If an enemy really wanted to destroy the security of the systems that NIST
wants to protect, all he would have to do is convince everyone to stop
upgrading the software.

This chapter is about CREATING CREATING CREATING and disposal of the
system.  It should be about creating, UPGRADING UPGRADING UPGRADING, and
disposal of the system.

Upgrading a system doesn't happen by accident.  It requires planning from
the start.  Each of the 11 phases documented in Chapter 3 should encourage
making future upgrades seamless.  To be specific, upgrades should be rapid
(fast to complete once they've begun), frequent (happening periodically),
and prompt (low lead time between when a vulnerability is published and
when the upgrade can start).

1.  Stakeholder Requirements Definition: Should include "non-functional
requirements" (http://en.wikipedia.org/wiki/Non-functional_requirement),
including the ability to do upgrades rapidly, frequently, and promptly.
2.  Requirements Analysis: Should include measuring the rapidity,
frequency, and promptness of upgrades (see the sketch after this list).
3.  Architectural Design: Some designs are easier to upgrade than others.
For example, SOA architectures are easier to upgrade than monolithic
systems, and firmware is easier to upgrade than ROMs.
...
etc.
...
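
To make item 2 concrete, here is a minimal sketch (Python; the record
fields and the numbers are hypothetical, not from the NIST draft) of how
the three properties could be measured from an upgrade log:

from datetime import datetime, timedelta

# Hypothetical upgrade log: when a fix was published upstream, when our
# upgrade started, and when it finished.  (Field names are made up.)
upgrades = [
    {"fix_published":    datetime(2014, 4, 7, 12, 0),
     "upgrade_started":  datetime(2014, 4, 7, 15, 0),
     "upgrade_finished": datetime(2014, 4, 7, 15, 20)},
    {"fix_published":    datetime(2014, 4, 14, 9, 0),
     "upgrade_started":  datetime(2014, 4, 14, 13, 0),
     "upgrade_finished": datetime(2014, 4, 14, 13, 25)},
]

def mean(deltas):
    return sum(deltas, timedelta()) / len(deltas)

# Rapid: how long an upgrade takes once it has begun.
rapid = mean([u["upgrade_finished"] - u["upgrade_started"] for u in upgrades])

# Prompt: lead time from fix publication to the start of the upgrade.
prompt = mean([u["upgrade_started"] - u["fix_published"] for u in upgrades])

# Frequent: the gap between successive upgrades.
starts = sorted(u["upgrade_started"] for u in upgrades)
frequent = mean([b - a for a, b in zip(starts, starts[1:])])

print("rapid (mean duration):  ", rapid)
print("prompt (mean lead time):", prompt)
print("frequent (mean gap):    ", frequent)

A team that tracks these three numbers over time can tell whether its
upgrade process is actually improving, rather than guessing.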

The section should also include a discussion of the "smaller batches"
principle.  If we do upgrades once a year, the changes in that upgrade are
a long, long list (a large batch).  If something breaks, we do not know
which change caused the problem.  If we do upgrades frequently, the
"smaller batches" of changes mean we can identify problems more easily.
Ideally an upgrade happens after each change, so we can pinpoint the
problem immediately.  While this frequency may sound unrealistic, many
systems are now designed that way.  For example, Etsy has documented their
success with this approach (and other companies will soon be publishing
similar reports).
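
To put numbers on that (a back-of-the-envelope sketch; the figure of one
change per week is invented for illustration), here is what batch size
does to debugging effort when a deploy breaks:

import math

CHANGES_PER_YEAR = 52  # hypothetical: one change per week

# With one upgrade a year, all 52 changes land in one batch; any of them
# could be the culprit.  With weekly upgrades, each batch has one change.
for upgrades_per_year in (1, 4, 12, 52):
    batch = CHANGES_PER_YEAR // upgrades_per_year
    # Bisection steps needed to isolate the bad change: about log2 of
    # the batch size.
    steps = int(math.ceil(math.log(batch, 2))) if batch > 1 else 0
    print("%2d upgrades/yr -> %2d suspects per failure, "
          "~%d bisection steps" % (upgrades_per_year, batch, steps))

The yearly upgrade leaves 52 suspects and roughly six rounds of bisection;
the weekly upgrade points at the culprit immediately.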

Related to my point about 2.3.4, upgrades are a risk that is mitigated not
by avoiding them, but by doing them more frequently.  The smaller-batches
principle demonstrates that, as does the fact that when people do upgrades
more frequently they develop skills around them, see more opportunities to
optimize the process, and generally automate it.  Lastly, it reduces what I
consider to be the single biggest security hole in any system: the lead
time before a fix can be installed.


Carolyn, do you
Tom

-- 
Email: [email protected]    Work: [email protected]
Skype: YesThatTom
Blog:  http://EverythingSysadmin.com