Measuring the success of a security posture is difficult at best, because 
the desired result is nothing.  It is relatively plain to security folks 
that if the firewall is doing its job, the AV tools are blocking all the 
bad things, and day-to-day work is minimally affected by all our 
protections, then everything is working as it should.

But to management, it isn't always that clear.  Oftentimes you have to 
resort to using their math.  For example, to show the effectiveness of a 
firewall or AV filter, I would count the number of access attempts or 
infected emails, compare that to the number of successful blocks, and 
present a percentage of effectiveness.  I would also show when the AV and 
firewall rules were updated, maybe in terms of the number of new rules or 
known viruses, as well as how often updates are made.
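
The math itself is trivial; something like the following does the trick 
(a rough sketch in Python -- the counts and names here are made up, so 
pull the real numbers from your own firewall and AV logs):

    # Minimal sketch: turn raw attempt/block counts into the kind of
    # percentage management understands.  All counts are illustrative.

    def effectiveness(attempts, blocks):
        """Percentage of hostile events that were successfully blocked."""
        if attempts == 0:
            return 100.0  # nothing to block counts as fully effective
        return 100.0 * blocks / attempts

    firewall_attempts, firewall_blocks = 12408, 12395
    av_emails, av_caught = 731, 729

    print(f"Firewall:  {effectiveness(firewall_attempts, firewall_blocks):.1f}% blocked")
    print(f"AV filter: {effectiveness(av_emails, av_caught):.1f}% caught")

Presenting it as "we blocked 99.9% of 12,408 attempts" gives management 
both a success rate and a sense of the threat volume in one line.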

Another thing to use is your training plan.  We all know that employee 
(read: system user) training is just as important as all the hardware and 
software measures we use.  So show stats on how many employees have been 
trained to date and your rate of completion (for this year, of course).
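
If you want the completion number to look forward as well as back, a 
quick on-pace projection helps (again just a sketch; the headcount and 
January start are assumptions):

    # Completion rate plus a simple on-pace projection, assuming
    # training started in January; all numbers are illustrative.

    from datetime import date

    trained, headcount = 184, 240
    months_elapsed = date.today().month           # e.g. 6 in June
    monthly_rate = trained / months_elapsed       # employees per month
    months_to_finish = (headcount - trained) / monthly_rate

    print(f"Trained to date: {trained}/{headcount} "
          f"({100.0 * trained / headcount:.0f}%)")
    print(f"Projected months to 100%: {months_to_finish:.1f}")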

Sometimes, when they ask for expenditure justification, it can be 
difficult unless you can compare it to insurance premiums.  Although the 
results of a good security plan aren't as tangible as production-line 
output or keystrokes, the cost of reacting to something like Nimda can 
include repairs to the computers involved, downtime for every employee 
affected, and loss of productivity.
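
Even a back-of-the-envelope number makes the point.  Something like this 
(every figure below is an assumption; substitute your own rebuild times, 
wages, and headcounts):

    # Rough incident cost estimate, the kind of number that justifies
    # prevention spending.  All figures are placeholders.

    machines_hit = 40
    rebuild_hours_per_box = 4
    tech_hourly_rate = 35.0        # IT staff cost per hour
    idle_hours = 6                 # time each affected employee loses
    employee_hourly_cost = 25.0    # loaded wage per affected employee

    repair_cost = machines_hit * rebuild_hours_per_box * tech_hourly_rate
    downtime_cost = machines_hit * idle_hours * employee_hourly_cost

    print(f"Repair labor: ${repair_cost:,.2f}")
    print(f"Lost work:    ${downtime_cost:,.2f}")
    print(f"Total (min.): ${repair_cost + downtime_cost:,.2f}")

Compare a number like that to the annual cost of the controls that would 
have prevented it, and the insurance-premium argument makes itself.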


-----Original Message-----
From: Led Slinger 
Sent: Friday, June 07, 2002 09:01
To: [EMAIL PROTECTED]
Subject: Security Program Targets


I wanted to throw this question out to a broad range of security 
professionals because I have been struggling with this for quite some 
time.  The question is simple, but the answers elude me.  How does one 
measure the success of a security program?  I find it relatively simple 
to identify a risk and mitigate it using technology, but when corporate 
culture and business 'needs' butt heads with security requirements, I 
find myself losing more often than not.  Simple things, from DMZ 
environments versus punch-throughs to forcing patches on developers: 
they are quite simple to understand and to implement, and cost is not a 
factor; the objection is plain and simple "time is money".  But rarely 
does "time is money" come into play when rebuilding a box due to NIMDA 
or some other tragedy du jour.  OK, that's mostly bitching about life, 
but where I'm trying to go with this is: if you develop a sound 
security program and implement it both tactically and strategically, how 
do you really measure its success?  The number of incidents may go 
down, but even with a solid plan, the sheer number of new exploits and 
the fast rate of virus propagation may make the incident numbers go 
up.  This really isn't a measure of success or failure in my book.  Any 
suggestions, recommendations, or general information would be 
tremendously helpful!

Cheers,

Leds!

-- 
There's nothing wrong with Windows until you install it........
