Re: [SC-L] ddj: beyond the badnessometer

2006-07-14 Thread Gadi Evron
On Fri, 14 Jul 2006, Daniele Muscetta wrote:
 On 7/13/06, Gary McGraw [EMAIL PROTECTED] wrote:
  3) never use the results of a pen test as a punch list to attain
 You are right, but very sadly, that's how it gets used by a lot of them:
 "hey, the pen testers found problems 1, 2, 3 - we fix those, we are fine." No
 way. But still I've seen this done in a lot of places

Gary is correct on many issues, except for one:
pen-testing is NOT black-box testing. Black-box testing is comparable to
white-box testing in how its coverage can be quantified.

How the client deals with the results is unrelated to the type of
results. It's directly linked to why they ordered the test and how they
treat security.



Secure Coding mailing list (SC-L)
List information, subscriptions, etc -
List charter available at -

RE: [SC-L] ddj: beyond the badnessometer

2006-07-14 Thread Arian J. Evans
Great stuff, Nash. To reiterate one important statement: many orgs
today will *only* respond to a working exploit. (I'm not sure how
the sample (%clue) of orgs I see compares to Cigital's clients, but...)

Pen-test vs. code review, black-box, white-box, whatever:

There is absolutely no difference at the end of the day in terms of
*verification* between finding SQL injection or susceptibility to
XSS attacks via black box attacks or fully white human eyeball
source code review.
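Either route ends at the same verified finding. As a hedged sketch (the in-memory `users` table and function names are invented for illustration), the injectable pattern a black-box probe exercises and the fix a code review points at look like:

```python
import sqlite3

# Illustrative in-memory schema; table and column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('sally', 'report')")

def lookup_vulnerable(name):
    # String concatenation: the classic injectable pattern a pen test probes.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_fixed(name):
    # Parameterized query: the fix a code review would identify directly.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # leaks every row
print(lookup_fixed(payload))       # returns nothing
```

The black-box tester verifies the flaw from the outside with the payload; the reviewer verifies the same flaw from the inside by spotting the string concatenation. Same finding, same verification.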

Commission, Omission, transmission, no difference. It's verification.

The pen test *can* make an acceptable 'punch card'. Depends on how
the punch card results are written:

+ Does the punch card simply list the top 10 XSS'able parameters?

+ Or does it provide a finding covering both/either:

  - design omission
  - implementation commission/failure to encode output
    appropriately for the User Agent(s) specified?
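As a minimal sketch of the implementation-side fix (function names are invented; `html.escape` is Python's stdlib encoder for an HTML user agent):

```python
import html

def render_greeting(user_input):
    # Failure-to-encode version: reflects attacker-controlled input verbatim.
    return "<p>Hello, %s</p>" % user_input

def render_greeting_encoded(user_input):
    # Encodes output for the HTML user agent, rendering the payload inert.
    return "<p>Hello, %s</p>" % html.escape(user_input)

payload = "<script>alert(1)</script>"
print(render_greeting(payload))          # script tag survives: reflected XSS
print(render_greeting_encoded(payload))  # &lt;script&gt;...: plain text
```

A finding written at this level tells the developer both what was omitted in the design (an output-encoding requirement) and what to change in the implementation.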

The rabbit hole can, of course, go far more usefully deeper in
terms of problem resolution with source code review. I think
that's the worst sentence I've ever written. Today, maybe for
the last 1.5 years, I get "but what library should I use
to accomplish this with [insert_framework]?" instead of "uh,
what's output encoding?" or "input validation is too hard/slow."

Assertion Falsification can be a tail-chaser. That's why you
add in business context. I think you'll find that many good pen
testers sit down with the business and help them define security
goals for the application, e.g. "Rob must never get Sally's report"
or "Sally's report must remain sacred at all points: in transit,
storage, access, etc."
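A goal like that can be written down as an executable, falsifiable assertion. A hedged sketch (the ACL and the names are invented for illustration):

```python
# Hypothetical access-control table derived from the business goal
# "Rob must never get Sally's report"; names and structure are assumptions.
ACL = {"sallys_report": {"sally"}}

def can_read(user, document):
    # Default-deny: access only if the user is explicitly granted.
    return user in ACL.get(document, set())

# The security goal phrased as assertions a pen tester (or a unit
# test) can actively try to falsify:
assert can_read("sally", "sallys_report")
assert not can_read("rob", "sallys_report")
print("goal holds")
```

Phrasing goals this way is what lets a pen test act as verification: the tester's job becomes finding any path that makes the assertion false.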

Commonly this is achieved through threat modeling, which turns
pen testing into a verification mechanism.

Ultimately the folks on this list are pretty smart and I'd wonder
why/if this dialogue is needed, except that a recent discussion
opened my eyes a bit to approaches I thought were doornail dead.

A fairly large consultancy with a practice focused on application
security contacted me earlier this year and in the course of
discussing their approach to appsec I asked, "but when do you
talk to the business, and when do you work with the developers?"
and their response was "What?"

After repeating the question I got told, "Oh, no, you won't
talk to those folks or get to see their documentation; we're
security professionals, not developers."

So evidently there is still a market for high-dollar, completely
blind pen tests of apps with zero business context and no
understanding of architecture, dev/design goals, etc.


That's what I'm guessing Gary means, and surely that sun is
slowly setting.


p.s. - Nash, when I first read your post, I thought p2 started
with "Pen tests are highly addictive." Then I re-read.

 -Original Message-
 [mailto:[EMAIL PROTECTED] On Behalf Of Nash
 Sent: Thursday, July 13, 2006 9:18 AM
 To: Gary McGraw
 Cc: Secure Coding Mailing List
 Subject: Re: [SC-L] ddj: beyond the badnessometer
 On Thu, Jul 13, 2006 at 07:56:16AM -0400, Gary McGraw wrote:
  Is penetration testing good or bad?
 Test coverage is an issue that penetration testers have to deal with,
 without a doubt. Pen-tests can never test every possible attack
 vector, which means that pen-tests can not always falsify a security
 Ok. But... 
 First, pen-testers are highly active. The really good ones spend a lot
 of time in the hacker community keeping up with the latest attack
 types, methods, and tools. Hence, the relevance of the test coverage
 you get from a skilled pen-tester is actually quite good. In addition,
 the tests run are similar to real attacks you're likely to see in the
 wild. Also, pen-testing is often intelligent, focused, and highly
 motivated. After all, how would you like to have to go back to your
 customer with a blank report? And, the recommendations you get can be
 quite good because pen-testers tend to think about the entire
 deployment environment, instead of just the code. So, they can help
 you use technologies you already have to fix problems instead of
 having to write lots and lots of new code to fix them. All of these
 make pen-testing a valuable exercise for software environments to go
 Second, every software application in deployment has an associated
 level of acceptable risk. In many cases, the level of acceptable risk
 is low enough that penetration testing provides all the verification
 capabilities needed. In some cases, the level of acceptable risk is
 really low and even pen-testing is overkill. I do mostly code review
 work these days, but I find that pen-testing has more general
 applicability to my customers. There are exceptions, but not that
 Third, pen-tests also have real business advantages that don't
 directly address risk mitigation. Pen-test reports are typically more
 down to earth. That is, they can be read more easily and the attacks
 can be demonstrated more easily to