>I think anyone who has experience with deep dynamic testing knows they
>need automation tools with custom configuration ability, the ability to
>record workflow, a framework to create custom tests, etc.
Absolutely. But Arian, there are differing deployment models. You don't just
touch an application [...]
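To make the "custom tests" requirement from the quote above concrete, here is
a minimal sketch of what a scriptable dynamic check might look like, assuming
a hypothetical harness built on Python's requests library. The target URL,
login fields, and payload are all invented for illustration, not any
particular tool's API.

    # Minimal sketch of a custom dynamic test: replay a recorded login
    # workflow, then probe one parameter for error-based SQL injection.
    # All URLs, field names, and payloads below are hypothetical.
    import requests

    BASE = "https://staging.example.com"  # hypothetical target

    def login(session: requests.Session) -> None:
        # Recorded workflow step: authenticate so later requests reach
        # pages behind the login wall.
        session.post(f"{BASE}/login", data={"user": "qa", "pass": "secret"})

    def test_search_sqli(session: requests.Session) -> bool:
        # Custom test: send a classic error-based probe and look for a
        # database error string leaking into the response body.
        resp = session.get(f"{BASE}/search", params={"q": "' OR '1'='1"})
        return "SQL syntax" in resp.text or "ODBC" in resp.text

    if __name__ == "__main__":
        with requests.Session() as s:
            login(s)
            flagged = test_search_sqli(s)
            print("possible SQLi" if flagged else "no obvious SQLi signal")

The point is the shape, not the check itself: a recorded workflow step
(login) that custom tests can build on, rather than a fixed list of canned
scans.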
Steve,
I definitely agree that not using the tools was a big limitation -- especially
because the web interface wasn't as interactive and powerful as the tools' GUIs.
But for me, we had a hard time using a consistent and actually meaningful
scoring:
- What is a false-positive?
- How important is this particular finding?
On Wed, 5 Aug 2009, Romain Gaucher wrote:
> But for me, we had a hard time using a consistent and actually meaningful
> scoring:
> - What is a false-positive?
> - How important is this particular finding?
For those on this list, I cover these in some detail in my paper within
the NIST documentation [...]
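To make those two questions measurable, here is one minimal sketch of scoring
a tool run against ground truth, assuming a benchmark corpus with known,
seeded flaws. The severity weights and categories are invented for
illustration and are not taken from the paper mentioned above.

    # Sketch: score a tool run against ground truth. A finding is a
    # false positive if it matches no known flaw; "importance" is
    # approximated with an invented severity weight per category.
    SEVERITY = {"sqli": 10, "xss": 7, "info_leak": 2}  # hypothetical weights

    def score(reported: set[tuple[str, str]], known: set[tuple[str, str]]):
        # Each finding is (category, location), e.g. ("sqli", "/search").
        tp = reported & known          # true positives
        precision = len(tp) / len(reported) if reported else 0.0
        recall = len(tp) / len(known) if known else 0.0
        weighted = sum(SEVERITY.get(cat, 1) for cat, _ in tp)
        return precision, recall, weighted

    if __name__ == "__main__":
        known = {("sqli", "/search"), ("xss", "/profile"),
                 ("info_leak", "/debug")}
        reported = {("sqli", "/search"), ("xss", "/login")}  # 1 hit, 1 FP
        p, r, w = score(reported, known)
        print(f"precision={p:.2f} recall={r:.2f} weighted_score={w}")

Precision answers the false-positive question directly; the severity-weighted
sum is one crude stand-in for "how important is this particular finding."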
On Tue, 4 Aug 2009, Chris Wysopal wrote:
> As a group of security practitioners, it is amazing to me that we don't
> have more quantifiable testing, and tools/services are just dismissed
> with anecdotal data. I am glad NIST SATE '09 will soon be underway and,
> at least for static analysis tools, [...]
Kevin -- excellent points. Starting from the top:
+ this is happening... (really!)
+ "dynamic scanning" vendors are getting together to add/share more
data-points and lessons with:
++ WAF vendors
++ static-analysis automation vendors
++ consultants doing Pen-Testing, static analysis, threat modeling, [...]
Arian J. Evans wrote...
> The problem I had in the past with benchmarks was the huge degree of
> customization in each application I would test. While patterns emerge
> that are almost always automatable to some degree, the technologies
> almost always require hand care-and-feeding to get them to [...]
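One way to read that care-and-feeding point: the detection patterns can be
generic, but each application needs its own configuration before they fire
correctly. A minimal sketch of per-application scan configuration follows,
with every field (auth flow, session cookie, excluded paths) hypothetical:

    # Sketch: per-application "care and feeding" expressed as data, so a
    # generic scanner can be pointed at very different apps. Every field
    # below is hypothetical, not any real scanner's config format.
    from dataclasses import dataclass, field

    @dataclass
    class AppScanConfig:
        base_url: str
        login_path: str                     # recorded auth step
        credentials: dict[str, str]         # form fields for login
        session_cookie: str = "JSESSIONID"  # how we track a live session
        excluded_paths: list[str] = field(default_factory=list)  # e.g. /logout

    CONFIGS = {
        "legacy_intranet": AppScanConfig(
            base_url="https://intranet.example.com",
            login_path="/auth/do_login",
            credentials={"uid": "qa", "pwd": "secret"},
            excluded_paths=["/logout", "/admin/reset"],
        ),
        "customer_portal": AppScanConfig(
            base_url="https://portal.example.com",
            login_path="/session",
            credentials={"email": "qa@example.com", "password": "secret"},
            session_cookie="portal_sess",
        ),
    }

    if __name__ == "__main__":
        for name, cfg in CONFIGS.items():
            print(f"{name}: login via {cfg.base_url}{cfg.login_path}, "
                  f"skip {cfg.excluded_paths or 'nothing'}")

Expressing the customization as data keeps the scan logic generic while
capturing what actually differs from one target application to the next.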