Ars 
<http://arstechnica.com/journals/microsoft.ars/2008/07/11/microsoft-vista-more-secure-than-xp-leopard-and-linux>
has its panties in a twist because Kevin Turner dared to suggest that Vista 
is more secure than OS X or Linux. The thing is, from one point of view, Kevin 
is absolutely right, but from other points of view he is wrong, which points 
to a general failing that keeps popping up among security folks: how on earth 
do you measure security?

When talking about security there isn't a clean metric. You have:

Vulnerability: a software weakness (for this purpose) that provides a door in
Threat: the danger that someone will find the vulnerability
Risk: the likelihood and cost of the vulnerability being exploited
Exposure: the time that a vulnerability remains unpatched

Companies/orgs love to cherry-pick which one they go off of. Firefox is big on 
touting their low exposure while being horribly quiet about the number of 
vulnerabilities. Microsoft is big about vulnerability (and to some extent 
exposure, at least at the OS level) count, since they do have a much lower 
count than competitors that don't have a decent secure development lifecycle 
(which would be Apple and most of the OSS projects), but they don't like 
mentioning risk because they have more risk than freaking anyone.

The problem is, all of these approaches are at least a little right. Sure, the 
risk is huge on Windows as a whole, but if you install patches quickly your 
risk is actually pretty low, and with a lower vulnerability count there is less 
likelihood that someone will release a zero-day for a vulnerability that isn't 
patched. Mozilla is also right about Firefox being more secure, as the fast 
patch and deployment rate means that a zero-day isn't going to remain a concern 
for long (except that they have been a little too quick pushing patches and have 
regressed things on a number of occasions), but their higher vulnerability 
count does mean zero-days are going to crop up more often.

The only stance that isn't right is one that throws out vulnerability count as 
a metric because of "more eyes" (which I have complained about before). Being 
open source may mean that more good guys have access to code reviews, but it 
also means more bad guys can get a leg up looking for flaws. The number of 
known vulnerabilities is ALWAYS a problem regardless of why they are there 
(incidentally, again, that more-eyes argument is crap: code reviews are a 
horribly inefficient way of finding flaws, and you never know when some poorly 
trained reviewer will comment out the functionality that actually makes your 
random function random).

So if all of these are a little right, what is the problem? I think it is 
really twofold. First, it makes it very hard to make an informed comparison of 
two platforms in terms of security and best fit for your org. Second, it makes 
it hard to gauge your own security and communicate that to people in and out of 
your organization. Both are pretty important, and a common, informative metric 
benefits everyone (except those that look bad because of it).

Thinking about it I like:

[Σ (CriticalityModifier * Exposure)] / (nTotal * 10)

or, in English: for each vulnerability in the past twelve months, multiply its 
criticality modifier by the number of days the vulnerability was unpatched 
(patches that get rolled back don't count), sum those products, then divide by 
the total number of vulnerabilities times ten, to scale the number down a bit.

The criticality modifier is 0.5 for low, 1 for moderate, 2 for high, 3 for 
critical. As an example, if a hypothetical product had the following 
vulnerabilities over the past twelve months: 


Criticality    Exposure in Days
Critical       45
Critical       35
Critical       4
High           20
Moderate       45
Moderate       50
Low            145
Low            75

would give you (3*45 + 3*35 + 3*4 + 2*20 + 1*45 + 1*50 + 0.5*145 + 0.5*75) / 
(8 * 10) = 497 / 80 = 6.21 Security Rating.
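The calculation is mechanical enough to sketch in a few lines of Python. This is 
just my reading of the formula above (the names CRITICALITY_MODIFIER and 
security_rating are mine, not from any existing tool), using the example data 
from the table:

```python
# Sketch of the proposed Security Rating metric. The modifier values
# (0.5/1/2/3) and the example data come from the post; everything else
# is an illustrative assumption.
CRITICALITY_MODIFIER = {"low": 0.5, "moderate": 1, "high": 2, "critical": 3}

def security_rating(vulns):
    """vulns: list of (criticality, exposure_in_days) pairs covering
    the past twelve months (patches that get rolled back don't count)."""
    total = sum(CRITICALITY_MODIFIER[crit] * days for crit, days in vulns)
    # Divide by the vulnerability count times ten to scale the number down.
    return total / (len(vulns) * 10)

example = [
    ("critical", 45), ("critical", 35), ("critical", 4),
    ("high", 20),
    ("moderate", 45), ("moderate", 50),
    ("low", 145), ("low", 75),
]
print(security_rating(example))  # prints 6.2125 (6.21 rounded)
```

Lower is better here: fewer flaws, less severe flaws, and shorter exposure 
windows all pull the number down.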

This seems like a fair metric, as it measures a level of impact (the 
criticality, though this isn't horribly quantitative) of each flaw, the days of 
exposure to the flaw, and the total number of flaws. What it does not take into 
account is likelihood, as I can't come up with a good quasi-quantitative way to 
measure said likelihood (open to suggestions here).

Thoughts?

~ Joshbw

 

 
