On Tue, 11 Sep 2001, Ben Nagy wrote:

> When I review people's organisational security, and especially when
> I've actually set up some or all of the infrastructure, I do my own
> verification. However, afterwards I also like to advise them to begin
> a regular "blind" audit regime, often using a direct and hostile
> competitor (it's not a you-scratch-my-back arrangement, the other guys

While independent verification is a good thing, I'm not sure that I see
significant value in this- if you've done the right things with the right
stuff, then the blind test should fail, and if you haven't, then you
should fail.  I suppose I just haven't seen enough instances where
architectural goodness and solid defense in depth haven't led pretty
directly to shrinking the risk pool down to something that's easy to make
"go/no go" choices about while providing some quantification of the level
of exposure to a threat from a particular vector.

It's either that, or you're more worried about threats from vectors that I
dismiss as unlikely ;)

> _want_ the accounts). I think that there is a point where one is too
> close to the architecture, and teeny little problems start to look
> like major truck-sized holes. For that reason, blind pen-tests are
> useful (IMO, of course) for drawing a line that says "You Must Be This
> Smart to Hack This Network". Pentesters think like hackers - they do
> the same stuff, run the same sort of probes and scans and miss the
> same flaws.

I suppose I've never bought into the "think like hackers" thing because I
think what's missed is normally the hard part- looking at what typical
attackers are using tends to be relatively easy these days, and securing
against those threats isn't all that difficult.

To me the difficult part is validating that the operational folks are
doing what they need to do when they need to do it, and I'd rather come up
with a verification test that's indicative of, say, keeping BIND upgraded
than have someone try to figure out if that same old TSIG bug is in the
next release.
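To make the verification idea concrete, here's a rough sketch in Python.
The version numbers, the helper names, and the choice of "first fixed
release" are my own illustrative assumptions, not gospel:

```python
# Sketch of a verification check: is the reported BIND version at or
# above the minimum release believed to fix a known bug (e.g. the old
# TSIG bug)?  Version numbers here are illustrative assumptions.

def parse_version(s):
    """Turn a dotted version string like '8.2.3' into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

def is_patched(reported, minimum_fixed):
    """True if the reported version is at or above the first fixed release."""
    return parse_version(reported) >= parse_version(minimum_fixed)

MIN_FIXED = "8.2.3"   # assumed first release with the TSIG fix

print(is_patched("8.2.2", MIN_FIXED))   # older than the fix
print(is_patched("8.2.3", MIN_FIXED))   # at the fix
```

In practice you'd feed it the string from something like
`dig @server version.bind chaos txt`- assuming, of course, that the
server isn't configured to lie about its version banner.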

> I did note the point about companies that place restrictions on the
> penetration testers, or rule out certain classes of attack. That's Just
> Dumb, and removes pretty much _all_ value from the process.

Yet it's what happens extremely frequently in the real world.  Hell, you
should see some of the crap I've had to go through just to get a timeslot
to scan a DMZ!

> [...]
> > I'm curious- how does a blind test hold value versus a more 
> > open test? The only thing I can see (and trust me- I see it 
> > pretty clearly) is that it's significantly less expensive to 
> > implement for _most_ cases.
> 
> Ok, looking at it that way, while I'm pretty confident that having a third
> party audit your work is a Good Thing for the customer (although
> commercially risky for the vendor) I can't see any disadvantage from a
> security POV to making the 3rd party test open as well. I guess it could
> lead to risk inflation and thus over-engineering the site security, since
> one would expect most risks to be exposed in that kind of scenario.

I suppose that's true, though I'm all for too much security over too
little- but if we assume that the first job is to meet the requirements,
then simply scoping the second to the same requirements should hold it in
check, shouldn't it?

 
> > Also, do you find vulnerability testing (like running nessus) 
> > to be about the same as pentesting in your definition, or 
> > more or less valuable?
> 
> Much less. I like to look at the OSSTMM[1] as a template for a penetration
> test. I consider what you call vulnerability testing to be a strict subset
> of a pen-test.

Right, I'm trying to quantify the value you perceive in the delta between
them.  To me, the value is pretty small, and given the right vulnerability
range to scan, could be statistically insignificant.  I think in my
experience I've found the extra things in a pen-test to be in the 2%
category of things to worry about much, much, much later, because we need
to get the other 98% nailed down first.

> > > need to be perfect - one just needs to know quite accurately how 
> > > imperfect they are.
> > 
> > I'm not sure you can know that accurately when blind.  That's 
> > actually probably my biggest problem with blind tests- the 
> > tester doesn't get to see the configuration file that could 
> > contain the backdoor from hell.[...]
> 
> I don't see that blind pen-testing can help much with a priori risk
> assessment. What it _can_ help with is verification of the risk levels
> that have been assigned to various problems. If you had "Hack the
> Webserver" as "Trivial" and the pen-testers couldn't do it, either
> they're morons or you overestimated the risk.

Or they aren't aware of the risk- which can be true of non-morons.  I
suppose I just think that spot testing is of fairly limited value to most
organizations because they've got worse problems in getting process and
procedure down.  For organizations that get that part right I suppose
there's some value to spot testing, though I'm just not sure that more and
better can't be done with configuration validation than with pen-tests.
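As a sketch of what I mean by configuration validation- the rule syntax
below is invented, and real configs would need canonicalization before
comparing, but it shows the shape of the check:

```python
# Sketch of configuration validation: diff a device's running ruleset
# against an approved baseline and flag any drift.  The rule strings
# are an invented, illustrative syntax.

def config_drift(baseline, actual):
    """Return (missing, unexpected) rules relative to the baseline."""
    baseline, actual = set(baseline), set(actual)
    return sorted(baseline - actual), sorted(actual - baseline)

approved = [
    "deny ip any 10.0.0.0/8",
    "permit tcp any host 192.0.2.10 eq 25",
]
running = [
    "permit tcp any host 192.0.2.10 eq 25",
    "permit tcp any host 192.0.2.10 eq 23",  # rule nobody approved
]

missing, unexpected = config_drift(approved, running)
print("missing:", missing)
print("unexpected:", unexpected)
```

Run it from cron against every device and you've got the kind of
continuous check that a once-a-quarter pen-test can't give you.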

> > Typically audits fall more on the procedure part and 
> > penetration tests and vulnerability scans fall into a 
> > slightly different category which I tend to think of as 
> > "poking sticks through known holes."  They're only part of 
> > what should be the bigger picture, and they don't include 
> > architecture reviews that can show what sorts of things might 
> > let those stick through.
> 
> OK, I like this bit. You're pointing at a hole between process audits
> and pentests, where there should be something like a "best practice
> audit". This won't be covered by a pen-test, because that just tells
> you about what your security is, and not about if you did it the right
> way. While some people may believe that the final outcome is all
> that's important (and there's an argument for that) I like to make
> design recommendations based on General Solution Goodness. I get lots
> of people asking "What extra security do you get doing it your way?"
> and I have to say "None that I can think of right now - but that's the
> point. I know it's more correct, so in general it will be more
> secure."

I too get that quite a bit (along with "If the rule is in my firewall, why
the heck does it need to be in my router too?")- and if it's not for
defense in depth, I can normally come up with either scalability or
flexibility reasons.  I think that the final outcome scenario is minimized
by history- we've seen so many bugs uncovered that going with the right
design rather than the wrong one helps significantly with keeping up
against unknown bugs, which of course, blind testing can't help with.
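A toy back-of-the-envelope for the defense-in-depth answer, assuming
(unrealistically) that the two layers fail independently- the
probabilities are invented numbers, purely for illustration:

```python
# Toy defense-in-depth arithmetic: if the firewall rule and the router
# ACL fail independently, an attacker needs both to fail at once.
# The per-layer failure probabilities below are invented.

fw_miss = 0.01    # chance the firewall rule is wrong or bypassed
rtr_miss = 0.05   # chance the router ACL is wrong or bypassed

both_miss = fw_miss * rtr_miss   # both layers must fail together
print(both_miss)                 # roughly 5e-4
```

Real failures are correlated (same admin, same change window), so the
true number is worse than the product- but the duplicated rule still
buys you something even before the scalability arguments.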

> I think I'm also now convinced that your definition of "Verification
> Testing" above makes lots of sense for maximising system security. I'm
> still not convinced that blind pen-testing isn't useful for refining
> the _current_ real-world accuracy of risk assessments.

I suppose part of that is because I'm in a company where risk assessment
is done based on actual attacks, not potential vulnerabilities, and
there's a model that includes continuous alerting and ad hoc testing when
necessary.  Though the argument I just made flies hard into fire-and-forget
security too, so I'm not sure that threat levels can't be gotten
outside of vulnerability exploit attempts.  I need to think some more
about some of this.  Right now I'm leaning toward thinking that the less
realtime threat analysis you have, the more you need non-blind testing.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
[EMAIL PROTECTED]      which may have no basis whatsoever in fact."

_______________________________________________
Firewalls mailing list
[EMAIL PROTECTED]
http://lists.gnac.net/mailman/listinfo/firewalls
