Re: [SC-L] informIT: Building versus Breaking

2011-09-01 Thread Arian J. Evans
Not many builders go to BlackHat. BlackHat is by Breakers, for
Defenders. It is primarily attended by Defenders, with a smaller pool
of dedicated Breakers.

It is very valuable to our industry to have conferences focused on
Breaking, though they do include Builder and Defender talks as well.
Some of my first BlackHat talks were on a statistical
behavioral-anomaly-detection (B-A-D) WAF a few of us built; since
statistical behavioral anomaly detection is boring, we'd drop a few
zero-days on products in the talk to keep folks awake.
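(For the curious, the core of that B-A-D idea fits in a few lines. Here is a minimal sketch in Python; the simple length-based feature and all the names are mine for illustration, not anything from the actual product:)

```python
import statistics

def train(lengths):
    """Build a baseline (mean, stdev) from observed request-field lengths."""
    return statistics.mean(lengths), statistics.pstdev(lengths)

def is_anomalous(length, baseline, threshold=3.0):
    """Flag a value whose length deviates more than `threshold` sigma."""
    mean, stdev = baseline
    if stdev == 0:
        return length != mean
    return abs(length - mean) / stdev > threshold

# Typical 'username' values are short; a 900-byte payload stands out.
baseline = train([8, 10, 9, 11, 10, 12, 9, 10])
print(is_anomalous(10, baseline))   # a normal-looking request
print(is_anomalous(900, baseline))  # likely an attack payload
```

A real engine tracks many features per parameter (character classes, token structure, entropy), but the statistical skeleton is the same.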

If you want to reach Builders: there are already dev-focused
conferences and communities for them. Jeremiah Grossman and I have
made a point of going to developer-focused conferences around the
world, and we have been well received. So I suspect they'll let other
security folks in too.

Michael Coates has an excellent blog post suggesting an organization
for OWASP along the above lines - and appealing to all three groups -
it would be interesting to see other security conferences explore this
structure:

http://michael-coates.blogspot.com/2011/02/vision-for-owasp.html

As for your concerns about over-emphasis on breaking:

Breaking is concrete, measurable, and actionable. There are many
historical precedents for Breakers driving the innovations of
Builders.

For example: the auto industry's Builders learned a great deal about
safety from Breakers, and the evolution of car safety features holds
many lessons for us in how Breakers drive defense. From IR (cadaver
research) to black box (crash testing) to SAST/DAST automation tools
and test harnesses (the Hybrid III dummy and acceleration sleds), the
evolution of car safety was instrumentally fueled, if not driven, by
the innovations of the Breakers.

It makes sense that software security will benefit from many of the
same analogues. So - it's no surprise there is so much emphasis on
breaking!

Finally: Breaking sells. It's really hard for Defenders to sell
Building Secure to business owners without concrete measurements from
Breakers. Basically, Breakers help Defenders get budget for things
like Secure Builder research and programs, and Breakers provide
metrics for measuring Builder progress.

Let's face it: Breaking is far sexier than Building. When was the
last time you saw an exciting presentation on /GS in Visual Studio?
This may be why the SC-L list is smaller than the dozens of other
Breaker lists out there on the interwebs. Or it could be that the
problem is just so darn hard...

---
Arian Evans
Builder and Breaker


On Wed, Aug 31, 2011 at 7:16 AM, Gary McGraw g...@cigital.com wrote:
 hi sc-l,

 I went to Blackhat for the first time ever this year (even though I am 
 basically allergic to Las Vegas), and it got me started thinking about 
 building things properly versus breaking things in our field.  Blackhat was 
 mostly about breaking stuff of course.  I am not opposed to breaking stuff 
 (see Exploiting Software from 2004), but I am worried about an overemphasis 
 on breaking stuff.

 After a quick and dirty blog entry on the subject 
 http://www.cigital.com/justiceleague/2011/08/09/building-versus-breaking-a-white-hat-goes-to-blackhat/,
  I sat down and wrote a better article about it:

 Software [In]security: Balancing All the Breaking with some Building
 http://www.informit.com/articles/article.aspx?p=1750195

 I've also had a chat with Adam Shostack (a member of the newly formed 
 Blackhat Advisors) about the possibility of adding some building content to 
 Blackhat.  Go Adam!

 Do you agree that Blackhat could do with some building content??

 gem

 company www.cigital.com
 podcast www.cigital.com/silverbullet
 blog www.cigital.com/justiceleague
 book www.swsec.com

 ___
 Secure Coding mailing list (SC-L) SC-L@securecoding.org
 List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
 SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
 as a free, non-commercial service to the software security community.
 Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
 ___




[SC-L] How do you find CSRF?

2011-04-22 Thread Arian J. Evans
Hello fellow SCLers.

Cross-Site Request Forgery (CSRF) has been generating a high volume of
questions for us in the last year, and we have noticed increased
discussion on the webappsec mailing lists. As Jeremiah noted over on
the WASC list, this is a welcome change, really -- for most of the
last decade CSRF was ignored, until the Bad Peoples started exploiting
it in the wild.

To bring more clarity to the subject of CSRF, we have published a
detailed article describing our testing methodology and
categorizations for CSRF. We are very interested in how other
pen-testers, source code reviewers, and developers are tackling the
CSRF issue! Quite frankly, most automated detection is sorely limited,
and mitigation strategies are usually subverted by the detritus of XSS
littering most applications:

WhiteHat Security’s Approach to Detecting Cross-Site Request Forgery (CSRF)

https://blog.whitehatsec.com/whitehat-security%E2%80%99s-approach-to-detecting-cross-site-request-forgery-csrf/
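(For readers newer to the topic, the classic synchronizer-token defense the article discusses can be sketched in a few lines of Python. This is an illustrative toy, not WhiteHat's implementation; the function names and the in-memory token store are made up, and a real app keeps tokens in server-side session state:)

```python
import hmac
import secrets

# Per-session token store; a real app keeps this in server-side session state.
_session_tokens = {}

def issue_token(session_id):
    """Generate and remember a per-session CSRF token to embed in forms."""
    token = secrets.token_hex(16)
    _session_tokens[session_id] = token
    return token

def verify_token(session_id, submitted):
    """Constant-time compare of the submitted token against the session's."""
    expected = _session_tokens.get(session_id)
    if expected is None or submitted is None:
        return False
    return hmac.compare_digest(expected, submitted)

tok = issue_token("sess-1")
print(verify_token("sess-1", tok))       # legitimate same-site form post
print(verify_token("sess-1", "forged"))  # forged cross-site request fails
```

The point, of course, is that a cross-site attacker can make the victim's browser send the cookie but cannot read or guess the token -- which is also why the XSS "detritus" mentioned above subverts this defense: script injection can read the token out of the page.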

Due to popular demand, WhiteHat launched a new blog several weeks ago.
Jeremiah Grossman, myself, and the 30-some software security engineers
who do R&D in WhiteHat's Threat Research Center will all be posting
their application security content at this new blog.

https://blog.whitehatsec.com/

Gary McGraw has been harping on me for years to start blogging more
about blackbox testing, practically begging me, so I finally
capitulated! This should be a resource where folks from the SCL can
learn about the scientific marriage of dynamic testing with static
analysis and secure coding initiatives!

Cheerio,

---
Arian Evans
Specialist, Strategically Scaling Software Security



Re: [SC-L] InformIT: comparing static analysis tools

2011-02-04 Thread Arian J. Evans
That is a great question. According to Gartner, hybrid analysis (HA)
has the stench of inevitability. And in general, I agree.

There are cases where dynamic and static analysis each have clear
strengths. Pragmatic combination of the two shows promise in solving a
broad spectrum of test cases. Additionally, HA can help each analysis
type improve the other by adding context, but developing the
underlying technology to make that happen is non-trivial.

This is my guess as to how things will unfold:

Current HA attempts are at the vuln-mashup phase. Let's call this correlation.

FP reduction: the next step folks are working on in HA is
suppression therapy, e.g., using correlation to filter and suppress
false positives and increase the signal-to-noise ratio in the output
from both analysis types.
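A toy sketch of what I mean by correlation-as-suppression, in Python; the finding shape (url/param/class) is just an illustrative assumption, and real correlators match far fuzzier keys than this:

```python
def suppress_false_positives(static_findings, dynamic_findings):
    """Keep a static finding only when a dynamic finding confirms the same
    sink (URL + parameter + vuln class); unconfirmed ones are suppressed."""
    confirmed = {(f["url"], f["param"], f["class"]) for f in dynamic_findings}
    return [f for f in static_findings
            if (f["url"], f["param"], f["class"]) in confirmed]

static = [
    {"url": "/login", "param": "user", "class": "sqli"},   # unconfirmed
    {"url": "/search", "param": "q", "class": "xss"},      # confirmed
]
dynamic = [{"url": "/search", "param": "q", "class": "xss"}]
print(suppress_false_positives(static, dynamic))
```

In practice you would down-rank rather than drop the unconfirmed findings; dynamic non-confirmation is weak evidence of a false positive, not proof.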

FN reduction: HA holds the promise of heatmapping the coverage of both
static and dynamic testing. This would let the expert running the
solution see more fully what is and isn't getting covered, providing a
better notion of false negatives and allowing targeted tuning and
optimization, or a decision about where best to focus expert human
review efforts.
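Another toy Python sketch, this time of the heatmapping idea; the endpoint names and the two covered-sets are hypothetical:

```python
def coverage_heatmap(endpoints, sast_covered, dast_covered):
    """For each endpoint, record which analysis type actually reached it."""
    return {ep: {"sast": ep in sast_covered, "dast": ep in dast_covered}
            for ep in endpoints}

endpoints = ["/login", "/admin", "/api/v1/users"]
heat = coverage_heatmap(endpoints,
                        sast_covered={"/login", "/admin"},
                        dast_covered={"/login"})
# "/api/v1/users" is reached by neither tool: a candidate for manual review.
print(heat["/api/v1/users"])
```

The hard part, naturally, is building the endpoint inventory and the covered-sets in the first place, not the set arithmetic.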

Contextualization: The holy grail of HA would be to have both types of
automation automatically feed and tune each other. Black box would be
significantly enhanced by being fed framework config files and getting
access to things like function names/parameters and objects that are
not directly exposed; this would really help dynamic testing on MVC
apps. Likewise, I expect dynamic testing could provide some notion of
design or control flow back to the static engine to enhance static
authentication and authorization analysis. This would also help solve
for mobile: static could extract calls and functions from mobile
binaries, and dynamic could use that static context to test the
back-end web services they talk to more effectively.

While contextualization sounds great in theory, the complexity bar is
high enough that it may be a long time coming.

As development shifts to more modular code on top of platforms
(iPhone, Xbox, Rails, etc.), this is also driving interest in
lightweight solutions that can scan modular bits of code. Given that,
I think there is room for a very simplified, streamlined type of HA:
simple SAST that can feed a DAST unit-test capability. This is
probably more realistic to build than the Ultimate Context Integration
Engine idea mentioned above. The more the world moves toward coding in
this manner, the more a solution like this makes sense. You would miss
a lot, but it should be lightweight and actually work.

For now though - the HA options boil down to mashups, and whether or
not suppression therapy is right for you.

We will see where it goes next...

---
Arian Evans
Software Security Scanning Snob


On Fri, Feb 4, 2011 at 2:21 PM, Prasad N Shenoy prasad.she...@gmail.com wrote:
 Yeah, clear the cloud of confusion before talking about the cloud so to
 speak. Not all SaaS offerings available today qualify to be cloud based.
 Well, this thread got morphed into a cloudy discussion. Attempting to get
 back on track, I would say IMHO, it's subjective whether the static analysis
 or dynamic analysis (pen testing/BB testing) technologies have hit the wall;
 it depends on who you ask. There is some element of saturation there, I
 believe, else the industry (term very generously used here) wouldn't be
 focusing on things like Hybrid Analysis. Having said that, what's the future
 of HA?
 Sent from my iPhone
 On Feb 4, 2011, at 12:27 PM, Ben Laurie b...@google.com wrote:



 On 4 February 2011 09:22, Chris Wysopal cwyso...@veracode.com wrote:



 “Breaking news.  Google says not to use the cloud.  Improving on-premise
 tools is the future.”

 My view is personal. However, in general, whether the cloud is a good place
 for your data depends on your data and the relationship you have with the
 cloud provider. If your boss says "no, you can't push this stuff outside our
 network" then clearly the cloud is not the right answer (or your boss
 doesn't understand the problem).




 Sorry, I couldn't help myself. :-)



 -Chris



 From: Ben Laurie [mailto:b...@google.com]
 Sent: Friday, February 04, 2011 11:34 AM
 To: Jim Manico
 Cc: Chris Wysopal; Secure Code Mailing List
 Subject: Re: [SC-L] InformIT: comparing static analysis tools





 On 3 February 2011 16:02, Jim Manico jim.man...@owasp.org wrote:

 Chris,

 I've tried to leverage Veracode in recent engagements. Here is how the
 conversation went:

 Jim:
 Boss, can I upload all of your code to this cool SaaS service for
 analysis?

 Client:
 Uh no, and next time you ask, I'm having you committed.

 I'm sure you have faced these objections before. How do you work around
 them?



 Don't use SaaS, obviously.



 I'd rather see LLVM's static analysis tools get improved (the framework,
 btw, is really nice to work with).



 -Jim Manico
 http://manico.net

 On Feb 3, 2011, at 1:54 PM, Chris Wysopal cwyso...@veracode.com 

Re: [SC-L] InformIT: comparing static analysis tools

2011-02-03 Thread Arian J. Evans
Great article, Gary. I have seen and verified first-hand many of the
static-technology challenges you describe, including multi-million
dollar cost overruns. After some great dialogue with John Steven, I
suspect we have had similar experiences.

I was just about to write a similar article at a higher level - about
how the vast majority of enterprise customers I work with are actively
moving security into the SDLC. The time has come, the event has
tipped, and SDLC security is indeed mainstream. This is an exciting
time to be in the industry.

However - I was curious about your comments about dynamic tools
reaching their limit or something like that, as customers move
security efforts deeper into the SDLC. What does that mean?

I see customers making extensive use of dynamic testing and leveraging
it deeper and deeper into the SDLC. Enterprises are aggressively
rolling out and expanding dynamic testing earlier in the SDLC. Newer
dynamic testing technologies help solve or reduce some of the key pain
points that static technologies alone are causing them, as you so well
illustrated.
I am very interested in why you sound dismissive of these successful
technologies. Your article makes it sound like they are hitting some
invisible limit, when in fact hundreds of enterprises are expanding
dynamic testing in the SDLC, and these are serious projects that run
into seven figures.

Any insight you can share would be appreciated!

Great work identifying the general shift: SDLC security is moving mainstream.

---
Arian Evans
Software Security Referee



On Wed, Feb 2, 2011 at 6:48 AM, Gary McGraw g...@cigital.com wrote:
 hi sc-l,

 John Steven and I recently collaborated on an article for informIT.  The 
 article is called Software [In]security: Comparing Apples, Oranges, and 
 Aardvarks (or, All Static Analysis Tools Are Not Created Equal) and is 
 available here:
 http://www.informit.com/articles/article.aspx?p=1680863

 Now that static analysis tools like Fortify and Ounce are hitting the 
 mainstream there are many potential customers who want to compare them and 
 pick the best one.  We explain why that's more difficult than it sounds at 
 first and what to watch out for as you begin to compare tools.  We did this 
 in order to get out in front of test suites that purport to work for tool 
 comparison.  If you wonder why such suites may not work as advertised, read 
 the article.

 Your feedback is welcome.

 gem

 company www.cigital.com
 podcast www.cigital.com/silverbullet
 blog www.cigital.com/justiceleague
 book www.swsec.com





Re: [SC-L] [WEB SECURITY] Backdoors in custom software applications

2010-12-23 Thread Arian J. Evans
Sebastian -

Looks like you got great replies! Lots of different theories and ideas here.

On a day to day basis - here are the most common backdoors in
webapps I've encountered over the last 15 years or so:

1) Developer Tools Backdoor hidden under obscure path
2) COTS module improperly deployed results in backdoor
3) Custom admin module, Auth gets changed/removed, results in same as #2
4) MVC framework autobinding exposes functions not intended to be
exposed resulting in backdoor

Most of these backdoors stem from ignorance or accident. They can turn
malicious, but in the majority of cases I have seen, they were not
intended to be. I have only seen deep-evil, malicious backdooring a
couple of times.

In devilish detail:

1) Back Door hidden under obscure path

/app/bin/steve/stevessecretphrase/tools/ (stuff under here is
dangerous and/or bad)

I see this happen over and over again for a variety of reasons. As
mentioned - most were not intentionally malicious - at least before
the developer who made the backdoor was fired. Usually the original
motivation was to instrument some part of the code for runtime
analysis, or provide a test/debugging interface for the developer.

Automated static analysis is by and large useless out of the box at
detecting these; they are valid web applications from an
implementation-level perspective. Blind automated black-box testing is
fairly limited at finding these too... these types of backdoors aren't
linked from the main app, and they are rarely in the large, fairly
useless dictionary of directory names the scanners brute-force.

How do you find these?

I wrote a tripwire-like tool for my webapps that tracked and diffed
files and paths for (a) changes, and (b) cross-mapped them to request
paths in the WWW logs. In the early days I knew all the paths of all
the web apps we wrote, so this let me identify new and unusual paths
as they showed up on the file system or in request logs. As we grew,
that quickly failed to scale. When your web apps grow into the dozens
and hundreds... I think a WAF is your only hope here.
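A stripped-down Python sketch of that tripwire idea; the paths and hashes below are illustrative stand-ins, and a real deployment would persist the baseline between runs and parse actual WWW logs:

```python
import hashlib
import os

def snapshot(webroot):
    """Hash every file under the webroot to build a baseline."""
    state = {}
    for dirpath, _, files in os.walk(webroot):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(baseline, current):
    """Report paths added, removed, or modified since the baseline."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {p for p in set(baseline) & set(current)
               if baseline[p] != current[p]}
    return added, removed, changed

def unknown_request_paths(logged_paths, known_paths):
    """Requested URL paths that map to no known application path."""
    return [p for p in logged_paths if p not in known_paths]

# Illustrative baselines (in practice, the output of snapshot() over time).
base = {"/var/www/index.php": "aaa", "/var/www/login.php": "bbb"}
cur = {"/var/www/index.php": "aaa", "/var/www/login.php": "ccc",
       "/var/www/steve/tools.php": "ddd"}
added, removed, changed = diff(base, cur)   # tools.php is new: investigate
unknown = unknown_request_paths(["/index.php", "/steve/tools.php"],
                                {"/index.php", "/login.php"})
```

The file-system diff catches the backdoor appearing on disk; the log cross-map catches it being used. Either signal alone is easy to miss.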

Modern web apps don't lend themselves to automated file-level
path/file/directory audits. Source code scanning lacks the context
needed here; plus, these things are very often config-file dependent,
config files in prod always differ from the CBE/SIT code being
scanned, and in prod they are rarely audited properly.

The good news is that the bad guys have just as much trouble finding
these as you do. The exception is an ex-employee with insider
knowledge (usually the person who wrote the thing). But while
dangerous, this appears to be one of the least-common sources of
compromise.



2) COTS module improperly deployed or configured for AuthC/Z resulting
in Back Door

Example: some Peoplesoft or SAP module with employee PII or amazing
administrative powers accidentally gets:

(a) deployed with insufficient (or missing) AuthC/Z, usually as part
of some grand web SSO scheme that turned messy.

(b) deployed to an unintended production server region that does not
have the same controls as the intended region, e.g., IT is counting on
Windows Integrated Authentication over HTTP on the intranet for auth
on this webapp, but someone deployed parts of it to the
Internet-facing/DMZ webservers. Now you can access it with no
authentication at all, or with Basic Auth and a default vendor or
admin/admin type account, easily accessible over the Internet.

At the rate I see #2 increasing, it may replace #1 soon.



3) Homegrown admin tools deployed with insufficient AuthC/Z

Same as #2, but harder to find with automation. It's easy to cook up
static and dynamic tests for things everyone knows about, e.g.
/peoplesoft/admintools/.

It is less easy to look for things you do not know exist:
/sebastians/homebrew/admintools

I see two backdoor situations here: one where /admintools/
accidentally has auth removed for everything in it, and one where only
a specific function/form fails to check auth, allowing you to create a
new administrator, or such, if you know the right request to make.



4) MVC framework Autobinding exposes parameters in namespace not
intended to be exposed in the UI at all, or at the privilege level
they get exposed at.

This is a newer, but rapidly growing backdoor problem. I do not see
this being discussed much yet in Web AppSec circles, so either I'm
missing something or other folks don't realize the potential impact of
this problem.

There are not many vendors providing effective solutions for
discovering MVC backdoors, I suspect because there is no demand for
this yet. Operational security people driving software security
analysis efforts are often not aware of the slightly messy features in
MVC frameworks (like autobinding). And quite frankly, I'm not sure how
many of the software security automation vendors get it. But I
digress.
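To make the autobinding problem concrete, here is a toy Python sketch; the Account class and its field names are invented, and real MVC frameworks differ in the details, but the mass-assignment mechanism is the same:

```python
class Account:
    def __init__(self):
        self.display_name = ""
        self.is_admin = False  # never meant to be client-settable

def naive_autobind(obj, params):
    """Bind every request parameter onto a matching attribute -- the
    convenience feature that quietly creates the backdoor."""
    for key, value in params.items():
        if hasattr(obj, key):
            setattr(obj, key, value)
    return obj

def allowlist_bind(obj, params, allowed):
    """The fix: bind only explicitly allowed fields."""
    for key in allowed:
        if key in params:
            setattr(obj, key, params[key])
    return obj

# An attacker adds an extra parameter the UI never exposes.
evil = {"display_name": "steve", "is_admin": True}
print(naive_autobind(Account(), evil).is_admin)                     # escalated
print(allowlist_bind(Account(), evil, ["display_name"]).is_admin)   # safe
```

The dangerous parameters never appear in any form or link, which is exactly why crawlers and brute-force dictionaries miss them.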

To test for these issues I like to extract a parameter heatmap from

Re: [SC-L] [WEB SECURITY] Re: What do you like better Web penetration testing or static code analysis?

2010-04-27 Thread Arian J. Evans
So - Just to make sure I understand - You are saying you don't
actually perform all of these activities for clients to help them
secure their web software today?

I think that will be a relief to some on the list. I know a few people
called me concerned that they were never going to have time to sleep
again with all that to do!

Overall you do make interesting points with your ideas. I definitely
agree with your assertion that automation alone has significant
limitations.

This is definitely the right forum to bounce around your ideas about
what types of security/secure/coding/analysis activities may work,
what activities we might want to try out, and what the best books to
read are, to help us figure out how to secure the bazillions of web
applications that exist today.

Ciao,

---
Arian Evans



On Tue, Apr 27, 2010 at 12:52 PM, Andre Gironda and...@gmail.com wrote:
 On Tue, Apr 27, 2010 at 11:52 AM, Arian J. Evans
 arian.ev...@anachronic.com wrote:
 So to be clear -

 You are saying that you do all of the below when you are analyzing
 hundreds to thousands of websites to help your customers identify
 weaknesses that hackers could exploit?

 How do you find the time?

 Not me personally, but the industry as a whole does provide most of
 these types of coverage. Everyone sees it a different way, and
 probably everybody is right.

 What I do find incorrect and wrong is assuming that you can automate
 anything (especially risk and vulnerability decisions -- is this a
 vuln; is this a risk).

 What I also find wrong is that the tools which attempt to automate
 finding vulnerabilities and assigning risk (but can't deliver on
 either) cost $60k/assessor for a code scan or $2k/app for a runtime
 scan.

 A team (doesn't have to be security people, but should probably
 include at least one) should instead use a free tool such as
 Netsparker Community Edition, crawl a target app through a few proxies
 (a few crawl runs) such as Burp Suite Professional, Casaba Watcher,
 and Google Ratproxy -- do a few other things such as track actions a
 browser would take (in XPath expression format) and plot a graph of
 dynamic-page/HTML/CSS/JS/SWF/form/parameter/etc objects (to show the
 complexity of the target app) -- and provide a data corpus (not just a
 database or list of findings) to allow the reviewers to make more
 informed decisions about what has been tested, what should be tested,
 and what will provide the most value. Combine the results with
 FindBugs, CAT.NET, VS PREfix/PREfast, cppcheck, or other static
 analysis-like reports in order to generate more value in making
 informed decisions. Perhaps cross-correlate and maps URLS to source
 code.

 I call this Peripheral Security Testing. Then, if time allows (or
 the risk to the target app is seen as great enough), add in a
 threat-model and allow the team to perform penetration-testing based
 on those informed decisions. I call this latter part, Adversarial
 Security Testing.

 Does manual testing take more time, or does it instead find the right
 things and allow the team to make almost-fully informed decisions,
 thus saving time? I will leave that as a straw man argument that you
 can all debate.

 dre



Re: [SC-L] What do you like better Web penetration testing or static code analysis?

2010-04-25 Thread Arian J. Evans
The world of web software is the future and the future is a wild
open-ended place by design. I for one would like to keep it that way.

You guys that write a lot of ideological software SDL-theory books can
keep your dinosaur Multics.

About 4 years ago I shifted my focus away from static analysis to
focus entirely on black box testing, not out of sexiness or
fun-factor, but because:


1) Software security needs to become operationalized, independent of
any discussions about SDL.

I think in many cases you need SDL and operational controls, but at a
minimum you need operational controls. I believe automated BB analysis
is one of the only ways to accomplish operationalizing software
security. I'm working with almost 1800 websites right now which
requires me to look at the problem intimately at full scale.


2) Secondly, I think BB helps answer the question "What are my top
trivially-exploitable threats in my web software?" reasonably well. BB
alone can provide better answers to this question than many forms of
static analysis alone can provide.

Sorry, I know a lot of you on this list disagree with #2, but you're
wrong. Real-world compromises to web software completely validate #2.

Luckily, you don't have to choose one versus the other.

BB and static analysis fit together hand in glove, and obviously some
of us on this list are working to explore the best marriage of the
two. I think we will be able to really dial in the efficiency of
analysis efforts once we have a clearer understanding of where BB and
static overlap, and where they don't.

Just to be clear- there are absolutely many times doing BB-only
analysis where I would love to have access to source.

Static analysis can be so much faster and more effective at verifying
certain classes of defects in source (mostly syntax
injection/manipulation issues, and some plain-vanilla auth issues),
especially once you already know what you are looking for. Which again
speaks to the value of hybrid solutions.

---

Finally, to add to Kevin's point: I have seen situations where
developers respond much more positively to talking to someone who can
read and explain their code. In fact, I just spent this week on the
road talking to developers and looking at source code for this very
reason (their lack of understanding of BB analysis/results).

Developers really like to talk to other developers, and not some
hotshot pen-tester with crazy hair and earrings that is full of Way
Cool Exploits and has no real idea how software development happens.

Unfortunately, in the information-security industry, there are far
more Way Cool Exploit Dudes than there are professionals with a
background in software development. It is what it is.

Human_Hand_Driven(BB + Static + Developer Interpretation)==Great Combo Answer

---
Arian Evans



On Fri, Apr 23, 2010 at 11:08 AM, Brian Chess br...@fortify.com wrote:
 I like your point Matt.  Everybody who's responded thus far has wanted to
 turn this into a discussion about what's most effective or what has the most
 benefit, sort of like we were comparing which icky medicine to take or which
 overcooked vegetable to eat.  Maybe they don't get any pleasure from the
 work itself.

 It sounds as though you need to change up your static analysis style.  A few
 years back we ran competitions at BlackHat where we found we could identify
 and exploit vulnerabilities starting from static analysis just as quickly as
 from fuzzing.  Here's an overview:

 http://reddevnews.com/Blogs/Desmond-File/2008/08/Iron-Chef-Competition-at-Black-Hat-Cooks-Up-Security-Goodness.aspx

 Interviews with Charlie Miller and Sean Fay:
 http://blog.fortify.com/blog/2009/05/02/Iron-Chef-Interviews-Part-1-Charlie-Miller-1-2
 http://blog.fortify.com/blog/2009/05/02/Iron-Chef-Interviews-Part-2-Sean-Fay

 Brian

 On 4/23/10 7:05 AM, Matt Parsons mparsons1...@gmail.com wrote:

 Gary,
 I was not stating which was better for security.  I was stating what I
 thought was more fun.   I feel that penetration testing is sexier.  I find
 penetration testing like driving a Ferrari and static code analysis like
 driving a Ford Taurus.   I believe with everyone else on this list that
 software security needs to be integrated early in the development life
 cycle.  I have also read most of your books and agree with your findings.
 As you would say I don't think that penetration testing is magic security
 pixie dust but it is fun when you are doing it legally and ethically.  My
 two cents.
 Matt


 Matt Parsons, MSM, CISSP
 315-559-3588 Blackberry
 817-294-3789 Home office
 Do Good and Fear No Man
 Fort Worth, Texas
 A.K.A The Keyboard Cowboy
 mailto:mparsons1...@gmail.com
 http://www.parsonsisconsulting.com
 http://www.o2-ounceopen.com/o2-power-users/
 http://www.linkedin.com/in/parsonsconsulting
 http://parsonsisconsulting.blogspot.com/
 http://www.vimeo.com/8939668
 http://twitter.com/parsonsmatt












 -Original Message-
 From: sc-l-boun...@securecoding.org 

Re: [SC-L] [WEB SECURITY] RE: How to stop hackers at the root cause

2010-04-14 Thread Arian J. Evans
Keyboard Cowboy,

Education is always a good thing. I think kids should have the opportunity
to learn both sides of software security. Great suggestion.

Kids, by nature, are drawn to things that are taboo and demonized,
which hacking no doubt falls into, and, according to Daniel, also
Angelina Jolie.

We can find great analogies to the hacker kids problem in recent studies
done on teenage behaviors:

The Bible Belt, particularly evangelical areas in the South, has the
highest rates of teen sex and pregnancy in the US. Telling kids to
abstain clearly doesn't work as well as teaching them how things work,
in particular careful education surrounding the use of safety devices.
To the exact point you made in your blog.

We see the exact same statistics surrounding firearm safety and education
(in the US, again). Children (and adults) exposed to firearm safety and
education rarely fall into firearm-accident statistics. Studies indicate
that it is the kids we hide things from, that want to pull the trigger to
see what happens when they discover the [taboo].

In locations where children have open and honest instruction, and are
provided with viable outlets for their firearms (say, condoms), we
find discharge-accident rates to be lower per capita. Again, the same
point your blog post was making.

---

The Bad Peoples:

None of this does anything to solve the Bad People hacking problem. That
solution requires Guns or Religion, which is far off topic for this list.

As Daniel pointed out - there's also a huge problem in webappsec with *poor
people*. So, I think Daniel has some ideas for dealing with them too, but I,
the reader, am not sure I understand what he is suggesting. When he comes
back through the door maybe we'll learn more.

Definitely an exciting subject!

---
Arian Evans
Solipsistic Software Security Sophist


On Tue, Apr 13, 2010 at 6:33 AM, Daniel Herrera daherrera...@yahoo.comwrote:

  DARE didn't stop youth drug use,
 Sex Ed didn't stop teen pregnancy rates,
 Why would your program stop/reduce script kiddies... j/k

 In all seriousness I think your perspective on the cost/benefit is really
 skewed on this one.

 Attacks against US assets are a method of revenue generation in several
 impoverished areas around the world, places where the infrastructure would
 have very little means to even begin implementing a program like you
 described without serious financial aid. And once such a system was put in
 place, the financial drive would still push people to participate in this
 behavior to feed their families, pay their rent, etc.

 In the end I would try to think about the drivers behind malicious behavior
 a lot more closely. Sure, there are examples where hacking has been
 romanticized in the past within our society, but not enough for some kid to
 watch the movie HACKERS and then decide to go after his grandmother's
 credit card because then he would get to date Angelina Jolie. (well, other
 than me)

 I wrote this on my way out the door, so my point is in there somewhere, but
 it should probably go through some back and forth to get cleared up. Let me
 know if you, the reader, disagree.

 Regards,


 Daniel

 --- On *Mon, 4/12/10, Matt Parsons mparsons1...@gmail.com* wrote:


 From: Matt Parsons mparsons1...@gmail.com
 Subject: [WEB SECURITY] RE: How to stop hackers at the root cause
 To: 'Matt Parsons' mparsons1...@gmail.com, SC-L@securecoding.org
 Cc: owaspdal...@utdallas.edu, 'Webappsec Group' 
 websecur...@webappsec.org, webapp...@securityfocus.com
 Date: Monday, April 12, 2010, 9:51 PM


  I have published a blog post on how I think we could potentially stop
 hackers in the next generation.  Please let me know what you think of it or
 if it has been done before.



 http://parsonsisconsulting.blogspot.com/







 Matt Parsons, MSM, CISSP

 315-559-3588 Blackberry

 817-294-3789 Home office

 Do Good and Fear No Man

 Fort Worth, Texas

 A.K.A The Keyboard Cowboy

 mailto:mparsons1...@gmail.com

 http://www.parsonsisconsulting.com

 http://www.o2-ounceopen.com/o2-power-users/

 http://www.linkedin.com/in/parsonsconsulting

 http://parsonsisconsulting.blogspot.com/

 http://www.vimeo.com/8939668



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
___


Re: [SC-L] [WEB SECURITY] Re: [owaspdallas] Re: [WEB SECURITY] RE: How to stop hackers at the root cause

2010-04-14 Thread Arian J. Evans
You are absolutely right Paul. The problems with ignorance and
abstinence-based approaches to child education extend out well beyond
the Bible Belt, and can be found all over the US. I should have cast a
wider net. Also, great job at ruining a good laugh.

http://aspe.hhs.gov/hsp/abstinence07/
http://www.washingtonpost.com/wp-dyn/content/article/2009/03/18/AR2009031801597.html?hpid=topnews&sub=AR
http://www.salon.com/life/broadsheet/feature/2009/03/19/teen_birthrate/index.html
http://dir.salon.com/topics/sex_education/

The point here is that while education is valuable -- *comprehensive*
education is even more valuable.

This is a loaded subject and people with belief-system drivers can get
quite passionate about it. I'm not interested in a passionate
discussion about this subject.

I think the thread will turn into the tarpit of insanity if it goes
further so I suggest we be done,

---
Arian Evans



On Wed, Apr 14, 2010 at 10:29 AM, Paul Schmehl pschmehl_li...@tx.rr.com wrote:
 --On Tuesday, April 13, 2010 15:21:26 -0700 Arian J. Evans
 arian.ev...@anachronic.com wrote:

 Keyboard Cowboy,

 Education is always a good thing. I think kids should have the opportunity
 to
 learn both sides of software security. Great suggestion.

 Kids, by nature, are drawn to things that are taboo and demonized. Which
 hacking no doubt falls into, and according to Daniel, also Angelina Jolie.

 We can find great analogies to the hacker kids problem in recent studies
 done on teenage behaviors:

 The Bible Belt, particularly evangelicals in the south, have the highest
 rates of teen sex and pregnancy in the US. Telling kids to abstain
 clearly
 doesn't work as well as teaching them how things work, and in particular
 careful education surrounding the use of safety devices. To the exact
 point
 you made in your blog.

 This is totally off topic, but I simply cannot let this slide.  People like
 to throw out canards like this as if they are facts, and seldom are they
 ever questioned.

 First of all, your assertion isn't borne out by the data.  Secondly, you've
 not cited a single study to back up your assertion, in particular the claim
 that the lack of sex education (which you assume occurs due to religious
 objections) is responsible for the claimed, but not factual, higher
 pregnancy rates.

 According to a study done by the Guttmacher Institute in 2000 [1] (the
 Guttmacher Institute is a pro-choice group that advocates for sex
 education), here are the state rankings by teen pregnancy rate, with each
 state's abortion-rate rank listed alongside:

 1) Nevada                      4
 2) Arizona                    19
 3) Mississippi                28
 4) New Mexico              18
 5) Texas                      26
 6) Florida                      7
 7) California                  5
 8) Georgia                   22
 9) North Carolina         17
 10) Arkansas               41
 11) Delaware                8
 12) Hawaii                    6

 Of the top twelve states, only half are what could be considered Bible Belt
 states, so I think you have to look elsewhere for your explanation of teen
 pregnancy rates.  OTOH, it's pretty clear the Bible Belt states are
 significantly less likely to abort a teen pregnancy, which may or may not be
 an indicator of religious influence.  (I'm not prepared to say it is without
 data to support it.)

 About.com also has statistics about teen birth rates [2], and their
 statistics don't bear out your assertion either.  Their stats are based on
 the 2006 Guttmacher Institute report, and the rankings have changed very
 little.

 States ranked by rates of pregnancy among women age 15-19 (pregnancies per
 thousand):

  1. Nevada (113)
  2. Arizona (104)
  3. Mississippi (103)
  4. New Mexico (103)
  5. Texas (101)
  6. Florida (97)
  7. California (96)
  8. Georgia (95)
  9. North Carolina (95)
  10. Arkansas (93)

 States ranked by rates of live births among women age 15-19 (births per
 thousand):

  1. Mississippi (71)
  2. Texas (69)
  3. Arizona (67)
  4. Arkansas (66)
  5. New Mexico (66)
  6. Georgia (63)
  7. Louisiana (62)
  8. Nevada (61)
  9. Alabama (61)
  10. Oklahoma (60)

 Again, the so-called Bible Belt doesn't demonstrate a propensity to get
 pregnant at any higher rates than other parts of the country but clearly
 bears those children to term at a higher rate than other areas.

 Furthermore, the most recent statistics from the government [3], while they
 do show a change in the rankings, still do not bear out your assertion that
 the Bible Belt, particularly evangelicals in the south, have the highest
 teen pregnancy rates.  As I've shown, birth rates do not equal pregnancy
 rates.  You have to factor in abortions as well.

 You may well have been misled by MSNBC [4] (but then who hasn't been misled
 by MSNBC), because they recently reported a study that found a correlation
 between the Bible Belt and birth rates, but that study doesn't address
 pregnancy or abortion, so it's misleading.  The study also appears

Re: [SC-L] Metrics

2010-02-05 Thread Arian J. Evans
In the web security world it doesn't seem to matter much. Top(n) Lists
are Top(n).

There is much ideological disagreement over what goes in those lists
and why, but the ratios of defects are fairly consistent. Both with
managed code and with scripting languages.

The WhiteHat Security statistics report provides some interesting
insights into this, particularly the last one. It's one of the only
public stats reports out there for webappsec that I know of.

I have observed what I've thought to be differences anecdotally, but
when we crunch the numbers on a large scale, they average out and
issue ratios are fairly consistent. Which shows you the dangerous
power of anecdotes, and statistically small samples, to be misleading.
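That small-sample hazard is easy to demonstrate with a quick simulation. A minimal sketch, assuming an invented defect mix (the class names and ratios below are illustrative, not figures from the WhiteHat report):

```python
import random

random.seed(7)  # fixed seed so the run is repeatable

# Hypothetical "true" defect mix across a large population of web apps.
# Purely illustrative numbers -- not taken from any published report.
TRUE_MIX = {"xss": 0.40, "info_leak": 0.30, "sqli": 0.15, "csrf": 0.15}

def sample_ratios(n_findings):
    """Draw n findings from the true mix and return the observed ratios."""
    classes = list(TRUE_MIX)
    weights = [TRUE_MIX[c] for c in classes]
    draws = random.choices(classes, weights=weights, k=n_findings)
    return {c: draws.count(c) / n_findings for c in classes}

small = sample_ratios(20)       # one assessor's anecdotal sample
large = sample_ratios(100_000)  # aggregate across many assessments

# Worst-case deviation from the true mix for each sample size.
err_small = max(abs(small[c] - TRUE_MIX[c]) for c in TRUE_MIX)
err_large = max(abs(large[c] - TRUE_MIX[c]) for c in TRUE_MIX)
print(f"20 findings:      max error {err_small:.3f}")
print(f"100,000 findings: max error {err_large:.3f}")
```

The 20-finding "anecdote" will typically miss the true ratios by several percentage points, while the large aggregate lands within a fraction of a percent: the same averaging-out we see when the numbers are crunched at scale.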

---
Arian Evans
Software Security Statistician


On Fri, Feb 5, 2010 at 7:07 AM, McGovern, James F. (eBusiness)
james.mcgov...@thehartford.com wrote:
 Here's an example.  In the BSIMM,  10 of 30 firms have built top-N bug
 lists based on their own data culled from their own code.  I would
 love to see how those top-N lists compare to the OWASP Top Ten or the
 CWE/SANS Top 25.  I would also love to see whether the union of these lists is
even remotely interesting.

 One of the general patterns I noted while providing feedback to the
 OWASP Top Ten listserv is that top ten lists do sort differently. Within
 an enterprise setting, it is typical for enterprise applications to be
 built on Java, .NET or other compiled languages, whereas if I were doing
 an Internet startup I may leverage more scripting approaches. So, if
 different demographics have different behaviors what would a converged
 list or even a separate list tell us?

 
 




Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Arian J. Evans


1. Bug Parades are great. They can include design flaws and such as
well. (Don't need a semantic debate about bug vs. flaw, please; we all
get it.)

It's time to refine our bug parades with more real world data and make
sure they reflect people's needs. If flawed design patterns need to be
in there, they can be.


2. Top25 has a very valuable place. It gets things done. It moves the
bar. It gets seatbelts installed.


3. Black Box testing is super valuable. It gives you a run-time
measuring stick to evaluate what works and what doesn't. Is developer
training working? Is your WAF working? Is your source code scanning
properly including the right libraries?

Not to mention BB gives you an immediate, essential attack surface you
need to know. And yes, I mean Need to Know.


4. Pen Testing is very valuable. It tells you what you absolutely,
positively, need to care about, at a minimum. There are many reasons
pen testing is valuable, and still sought-after, but that's for a
longer discussion.

I've been picking up Marcus Ranum detritus for over a decade by
helping people confused by his rants about how "penetrate and patch"
doesn't work, and how we need to start from the ground up and build
secure networks and write secure code. Maybe Ranum sandbagged you
with one of his rants and it stuck?

Anyway -- I can help you out here if you want to discuss further.


5. What world of science do you live in?

Modern science is driven by statistics. I provide my customers math
and stats, and constantly work to improve this. I think, in fact, we
provide more stats than anyone on the planet today in the field of
webappsec.

The fundamental premise of science is that a hypothesis becomes a
theory when you have tests that can be performed by two or more
people, and publicly verified. So we're doing all of that above.
That's definitely science.

I'm not sure where the "It's science time" comes in. Is there a dance
that goes with that?



 We'll just ignore the Nader & Feynman stuff.

I did not say "Nader & Feynman."

I said Nader fundamentally improved society by changing business SOP
and promoting safety controls that affected millions, through use of
bug parades and Top(n) lists, and awareness campaigns.

Feynman, not so much.

I'm not sure what your goal is.

Advance SoA: If it's to advance the state-of-the-art in software
security, then BSIMM may be a worthy, lofty goal, and rambling about
Feynman and science may be related.


Improve Immediate Quality: If it's to pragmatically improve the
quality of software security, then that's a different thing.

We could do some basic work on improving quantitative vs. qualitative
metrics definitions in software security, and improve focus on finding
out what is really attacked, and what attack surface is most
immediately at risk of compromise, and move the bar a meaningful
amount.

I guess I'm just not a fan of huge GW Bush style programs where you
mobilize a special task force and invade another country to count WMDs
before you can identify that you have a basic problem and take steps
to solve it. I'm not even sure big programs like that improve
security.

I think a couple of guns in a couple of hands of a couple of pilots
and, wow...we might not be in Afghanistan or Iraq.

I tend to lean more towards pragmatic solutions like that in software
security. I know most executives I deal with seem to lean in a similar
fashion.

I hear ESAPI makes a good gun these days. Whadda they call that thing?
ESAPI(waf)?

---
Arian J. Evans
When a strong man, fully armed, guards his own homestead, his
possessions are undisturbed. Luke 11:21



Re: [SC-L] BSIMM update (informIT)

2010-02-02 Thread Arian J. Evans
100% agree with the first half of your response, Kevin. Here's what
people ask and need:


Strategic folks (VP, CxO) most frequently ask:

+ What do I do next? / What should we focus on next? (prescriptive)

+ How do we tell if we are reducing risk? (prescriptive guidance again)

Initially they ask for descriptive information, but once they get
going they need strategic prescriptions.


Tactical folks tend to ask:

+ What should we fix first? (prescriptive)

+ What steps can I take to reduce XSS attack surface by 80%? (yes, a
prescriptive blacklist can work here)


 Implementation level folks ask:

+ What do I do about this specific attack/weakness?

+ How do I make my compensating control (WAF, IPS) block this specific attack?

etc.
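To make the tactical XSS item concrete, here is a minimal sketch of what such a prescriptive control might look like. The pattern list and helper names are hypothetical illustrations, and a blacklist of this sort is inherently incomplete and bypassable; contextual output encoding remains the durable fix:

```python
import html
import re

# A few signatures of the kind a prescriptive XSS blacklist might block at a
# WAF or input filter. Illustrative only: blacklists are easy to bypass and
# prone to false positives; they reduce attack surface, they don't close it.
XSS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),
    re.compile(r"javascript\s*:", re.IGNORECASE),
    re.compile(r"on\w+\s*=", re.IGNORECASE),  # inline event handlers
]

def looks_like_xss(value: str) -> bool:
    """Cheap first-pass check a compensating control might apply."""
    return any(p.search(value) for p in XSS_PATTERNS)

def render_safe(value: str) -> str:
    """The more durable fix: encode output for its HTML context."""
    return html.escape(value, quote=True)

print(looks_like_xss("<script>alert(1)</script>"))  # True
print(render_safe("<b>hi</b>"))  # &lt;b&gt;hi&lt;/b&gt;
```

A filter like this can knock out the bulk of naive injection attempts quickly, which is exactly the kind of 80% answer the tactical question is after; the encoding helper is what actually closes the hole.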

BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.

They need a clear-cut tree of prescriptive guidelines that work in a
measurable fashion. I agree and strongly empathize with Gary on many
premises of his article - including that not many folks have metrics,
and tend to have more faith and magic.

But, as should be no surprise, I categorically disagree with the
entire concluding paragraph of the article. Sadly it's just more faith
and magic from Gary's end. We all can do better than that.

There are other ways to gather and measure useful metrics easily
without BSIMM. Black Box and Pen Test metrics, and Top(n) List metrics
are metrics, and highly useful metrics. And definitely better than no
metrics.

Pragmatically, I think Ralph Nader fits better than Feynman for this discussion.

Nader's Top(n) lists and Bug Parades earned us many safer-society
(cars, water, etc.) features over the last five decades.

Feynman didn't change much in terms of business SOP.

Good day then,

---
Arian Evans
capitalist marksman. eats animals.



On Tue, Feb 2, 2010 at 9:30 AM, Wall, Kevin kevin.w...@qwest.com wrote:
 On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:

 Among other things, David [Rice] and I discussed the difference between
 descriptive models like BSIMM and prescriptive models which purport to
 tell you what you should do.  I just wrote an article about that for
 informIT.  The title is

 Cargo Cult Computer Security: Why we need more description and less
 prescription.
 http://www.informit.com/articles/article.aspx?p=1562220

 First, let me say that I have been the team lead of a small Software
 Security Group (specifically, an Application Security team) at a
 large telecom company for the past 11 years, so I am writing this from
 an SSG practitioner's perspective.

 Second, let me say that I appreciate descriptive holistic approaches to
 security such as BSIMM and OWASP's OpenSAMM. I think they are much
 needed, though seldom heeded.

 Which brings me to my third point. In my 11 years of experience working
 on this SSG, it is very rare that application development teams are
 looking for a _descriptive_ approach. Almost always, they are
 looking for a _prescriptive_ one. They want specific solutions
 to specific problems, not some general formula to an approach that will
 make them more secure. To those application development teams, something
 like OWASP's ESAPI is much more valuable than something like BSIMM or
 OpenSAMM. In fact, I believe your BSIMM research would confirm that
 many companies' SSGs have developed their own proprietary security APIs
 for use by their application development teams. Therefore, to that end,
 I would not say we need less _prescriptive_ and more _descriptive_
 approaches. Both are useful and ideally should go together like hand and
 glove. (To that end, I also ask that you overlook some of my somewhat
 overzealous ESAPI developer colleagues who in the past made claims that
 ESAPI was the greatest thing since sliced beer. While I am an ardent
 ESAPI supporter and contributor, I proclaim it will *NOT* solve our pandemic
 security issues alone, nor for the record will it solve world hunger. ;-)

 I suspect that this apparent dichotomy in our perception of the
 usefulness of the prescriptive vs. descriptive approaches is explained
 in part by the different audiences with whom we associate. Hang out with
 VPs, CSOs, and executive directors and they likely are looking for advice on
 an SSDLC or broad direction to cover their specifically identified
 security gaps. However, in the trenches--where my team works--they want
 specifics. They ask us "How can you help us to eliminate our specific
 XSS or CSRF issues?", "Can you provide us with a secure SSO solution
 that is compliant with both corporate information security policies and
 regulatory requirements?", etc. If our SSG were to hand them something like
 BSIMM, they would come away telling their management that we didn't help
 them at all.

 This brings me to my fourth, and likely most 

Re: [SC-L] Blog skiiers versus snowboarders CISSPs vs programmers

2010-01-13 Thread Arian J. Evans
The software security problem is a huge problem. There are not enough
CISSPs to even think about solving this problem.

CISSPs probably should have a tactical role helping categorize,
classify, and facilitate getting things done. Scanner jockeys and
network security folk will lead the operational charge to WAF and
block and such. (good or bad, you're gonna need this stuff, the
problem is just too darn big)

I don't think many good devs who enjoy building are going to want to
change careers to do source code audits. That gets mind numbing
awfully fast.

Developers definitely have a role to play in solving a lot of the
basic syntax-attack stuffs, by proper selection and application of
modern frameworks, technologies, and gap-APIs (like ESAPI). Most
CISSPs lack the skill to provide much value here.

Design issues will always exist, unless users some day wake up and
decide they prefer security over usability. But I don't see that
happening any time soon. Heck, my password on all my work machines is
"password."

$0.02 USD.

---
Arian Evans
capitalist marksman. eats animals.



On Tue, Jan 12, 2010 at 8:44 AM, Matt Parsons mparsons1...@gmail.com wrote:
 I wrote a blog post on the state of software security using the analogy of
 skiers versus snowboarders in the early 90's.

 Please let me know your thoughts and comments by replying to this list or my
 blog.

 http://parsonsisconsulting.blogspot.com/



 Thanks,
 Matt



 Matt Parsons, MSM, CISSP
 315-559-3588 Blackberry
 817-294-3789 Home office
 mailto:mparsons1...@gmail.com
 http://www.parsonsisconsulting.com
 http://www.o2-ounceopen.com/o2-power-users/
 http://www.linkedin.com/in/parsonsconsulting
 http://parsonsisconsulting.blogspot.com/







[SC-L] embedded systems security analysis

2009-08-20 Thread Arian J. Evans
Rafael -- to clarify concretely:

There are quite a few researchers that attack/exploit embedded
systems. Some google searches will probably provide you with names.

None of the folks I know of that actively work on exploiting embedded
systems are on this listbut I figure if I know a handful of them
in my small circle of software security folks - there have to be many
more out there.

Assuming you are safe is not just a dangerous assumption: it is wrong.

Specifically -

One researcher I know pulls boards and system components apart and finds
out who the source IC and component makers are.

Then they contact the component and IC makers, pretending to be the
board or system vendor who purchased the components, and ask for
documentation, debuggers, and magic access codes hidden in firmware (if
they cannot reverse them).

If this fails, the researcher has also befriended people at companies
who do work with the IC or board maker, traded them information, in
exchange for debuggers and the like.

This particular researcher does not publish any of their research in
this area. They do it mainly (I think) to help build better tools and
as a hobby. (Several of you on this list probably know exactly whom
I'm talking about. This person would prefer privacy, and I think the
person's employer demands it, unless you get him in person and feed
him enough beer.)

If I were a bettin' man, I'd figure that if I know a few people doing this
type of thing for quite a few years now -- there are bound to be many,
many more

Not sure what list to go to for talks on that type of thing.
Blackhat.com has some older presentations on this subject.

-- 
Arian Evans



On Wed, Aug 19, 2009 at 8:36 AM, Rafael Ruiz rafael.r...@navico.com wrote:
 Hi people,

 I am a lurker (I think). I am an embedded programmer and work at
 Lowrance (a brand of the Navico company), and I don't think I can
 provide too much to security because embedded software is closed per se.
 Or maybe I am wrong: is there a way to grab the source code from
 electronic equipment? That would be the only concern for embedded
 programmers like me, but I just like to learn about the things you talk about.

 Thank you.

 Greetings from Mexico.



Re: [SC-L] IBM Acquires Ounce Labs, Inc.

2009-08-06 Thread Arian J. Evans
 than either. NTO reps, feel free to spam me (me, 
 not the list).

 I will say this: Chris, I'm completely with you in that I'm convinced that the
 majority of the market buying scanners is not doing so based on any objective
 empirical testing, but rather on "who found what" or what they like.  I'm
 even saddened to say that I recently saw a presentation by an organization 
 tasked and paid to perform objective empirical analysis of scanners, that 
 literally ranked them based on what they found, with absolutely no testing 
 ground truth.

 I'm even more strongly convinced that the majority of those running these 
 tools completely underestimate the expertise required to properly operate 
 them and realize full potential from them.  Given the complexity of testing 
 software these days you still really need to know what you're doing to eke
 out of them what little value they hold. Even with realizing their full 
 potential, however, there's still a lot of work to be done beyond a scan to 
 perform anything resembling a complete assessment.  Of course, a human 
 assisted SaaS model has the potential to fill the gap, but from what I'm
 seeing, the majority of organizations using scanners like WI and AS in-house
 don't. Heck,
 even some really big name firms selling rather expensive fancily marketed 
 assessments don't.

 Shame, really.

 -Matt.


 -Original Message-
 From: Chris Wysopal [mailto:cwyso...@veracode.com]
 Sent: Tuesday, August 04, 2009 8:54 PM
 To: Arian J. Evans; Matt Fisher
 Cc: Kenneth Van Wyk; Secure Coding
 Subject: RE: [SC-L] IBM Acquires Ounce Labs, Inc.


 I wouldn't say that NTO Spider is a "sort of" dynamic web scanner. It is a
 top tier scanner that can battle head to head on false negative rate with the 
 big conglomerates' scanners: IBM AppScan and HP WebInspect.  Larry Suto 
 published an analysis a year ago, that certainly had some flaws (and was 
 rightly criticized), but genuinely showed all three to be in the same league. 
 I haven't seen a better head-to-head analysis conducted by anyone. A little 
 bird whispered to me that we may see a new analysis by someone soon.

 As a group of security practitioners it is amazing to me that we don't have
 more quantifiable testing, and that tools/services are just dismissed with
 anecdotal data.  I am glad NIST SATE '09 will soon be underway and, at least
 for static analysis tools, we will have unbiased independent testing. I am 
 hoping for a big improvement over last year.  I especially like the category 
 they are using for some flaws found as "valid but insignificant". Clearly
 they are improving based on feedback from SATE '08.

 Veracode was the first company to offer static and dynamic (web) analysis, 
 and we have been for 2 years (announced Aug 8, 2007).  We deliver it as a 
 service. If you have a .NET or Java web app, you cannot find a
 comparable solution from a single vendor today.

 -Chris

 -Original Message-
 From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On 
 Behalf Of Arian J. Evans
 Sent: Tuesday, July 28, 2009 1:41 PM
 To: Matt Fisher
 Cc: Kenneth Van Wyk; Secure Coding
 Subject: Re: [SC-L] IBM Acquires Ounce Labs, Inc.

 Right now, officially, I think that is about it. IBM, Veracode, and
 AoD (in Germany) claim to have this too.

 As Mattyson mentioned, Veracode only does static binary analysis (no
 source analysis). They offer dynamic scanning, but I believe it uses
 NTO Spider, IIRC, which is a simplified scanner that targets
 unskilled users, last I saw it.

 At one point I believe Veracode was in discussions with SPI to use WI,
 but since the Veracoders haunt this list I'll let them clarify what
 they use if they want.

 So IBM: soon.

 Veracode: sort-of.

 AoD: on paper

 And more to come in short order no doubt. I think we all knew this was
 coming sooner or later. Just a matter of when.

 The big guys have a lot of bucks to throw at this problem if they want
 to, and pull off some really nice integrations. Be interesting to see
 what they do, and how useful the integrations really are to
 organizations.

 --
 Arian Evans





 On Tue, Jul 28, 2009 at 9:29 AM, Matt Fisherm...@piscis-security.com wrote:
 Pretty much. HP/SPI has integrations as well, but I don't recall DevInspect
 ever being a big hit.  Veracode does both, as well as static binary, but as
 a SaaS model. Watchfire had a RAD integration as well, IIRC, but it clearly
 must not have had the share Ounce does.

 -Original Message-
 From: Prasad Shenoy prasad.she...@gmail.com
 Sent: July 28, 2009 12:22 PM
 To: Kenneth Van Wyk k...@krvw.com
 Cc: Secure Coding SC-L@securecoding.org
 Subject: Re: [SC-L] IBM Acquires Ounce Labs, Inc.


 Wow indeed. Does that make IBM the only vendor to offer both Static
 and Dynamic software security testing/analysis capabilities?

 Thanks  Regards,
 Prasad N. Shenoy

 On Tue, Jul 28, 2009 at 10:19 AM, Kenneth Van Wykk...@krvw.com wrote:
 Wow, big acquisition news

Re: [SC-L] IBM Acquires Ounce Labs, Inc.

2009-08-04 Thread Arian J. Evans
 
 they are improving based on feedback from SATE '08.

 Veracode was the first company to offer static and dynamic (web) analysis, 
 and we have been for 2 years (announced Aug 8, 2007).  We deliver it as a 
 service. If you have a .NET or Java web app, you cannot find a
 comparable solution from a single vendor today.

 -Chris

 -Original Message-
 From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On 
 Behalf Of Arian J. Evans
 Sent: Tuesday, July 28, 2009 1:41 PM
 To: Matt Fisher
 Cc: Kenneth Van Wyk; Secure Coding
 Subject: Re: [SC-L] IBM Acquires Ounce Labs, Inc.

 Right now, officially, I think that is about it. IBM, Veracode, and
 AoD (in Germany) claim to have this too.

 As Mattyson mentioned, Veracode only does static binary analysis (no
 source analysis). They offer dynamic scanning, but I believe it uses
 NTO Spider, IIRC, which is a simplified scanner that targets
 unskilled users, last I saw it.

 At one point I believe Veracode was in discussions with SPI to use WI,
 but since the Veracoders haunt this list I'll let them clarify what
 they use if they want.

 So IBM: soon.

 Veracode: sort-of.

 AoD: on paper

 And more to come in short order no doubt. I think we all knew this was
 coming sooner or later. Just a matter of when.

 The big guys have a lot of bucks to throw at this problem if they want
 to, and pull off some really nice integrations. Be interesting to see
 what they do, and how useful the integrations really are to
 organizations.

 --
 Arian Evans





 On Tue, Jul 28, 2009 at 9:29 AM, Matt Fisherm...@piscis-security.com wrote:
 Pretty much. HP/SPI has integrations as well, but I don't recall DevInspect
 ever being a big hit.  Veracode does both, as well as static binary, but as
 a SaaS model. Watchfire had a RAD integration as well, IIRC, but it clearly
 must not have had the share Ounce does.

 -Original Message-
 From: Prasad Shenoy prasad.she...@gmail.com
 Sent: July 28, 2009 12:22 PM
 To: Kenneth Van Wyk k...@krvw.com
 Cc: Secure Coding SC-L@securecoding.org
 Subject: Re: [SC-L] IBM Acquires Ounce Labs, Inc.


 Wow indeed. Does that make IBM the only vendor to offer both static
 and dynamic software security testing/analysis capabilities?

 Thanks & Regards,
 Prasad N. Shenoy

 On Tue, Jul 28, 2009 at 10:19 AM, Kenneth Van Wykk...@krvw.com wrote:
 Wow, big acquisition news in the static code analysis space announced today:

 http://news.prnewswire.com/DisplayReleaseContent.aspx?ACCT=104STORY=/www/story/07-28-2009/0005067166EDATE=


 Cheers,

 Ken

 -
 Kenneth R. van Wyk
 KRvW Associates, LLC
 http://www.KRvW.com

 (This email is digitally signed with a free x.509 certificate from CAcert.
 If you're unable to verify the signature, try getting their root CA
 certificate at http://www.cacert.org -- for free.)






 ___
 Secure Coding mailing list (SC-L) SC-L@securecoding.org
 List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
 SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
 as a free, non-commercial service to the software security community.
 ___



Re: [SC-L] IBM Acquires Ounce Labs, Inc.

2009-08-04 Thread Arian J. Evans
Great answer, John. I especially like your point about web.xml.

This goes dually for black-box testing. There would be a lot of
advantage to being able to get (and compare) these types of config
files today for dialing in BBB (Better Black Box vs. blind black box)
testing. I don't think anyone is doing this optimally now. I know I am
eager to find static analysis that can provide/guide my BBB testing
with more context. I definitely think we will see more of these
combined-services evolve in the future. It only makes sense,
especially given some of the context-sensitive framing considerations
in your response.

Thanks for the solid thoughts,

-- 
Arian Evans





On Wed, Jul 29, 2009 at 5:44 AM, John Stevenjste...@cigital.com wrote:
 All,

 The question of "Is my answer going to be high-enough resolution to support 
 manual review?" or "...to support a developer fixing the problem?" comes down 
 to "it depends."  And, as we all know, I simply can't resist an "it depends" 
 kind of subtlety.

 Yes, Jim, if you're doing a pure JavaSE application, and you don't care about 
 non-standard compilers (jikes, gcj, etc.), then the source and the binary 
 are largely equivalent (at least in terms of resolution); Larry mentioned 
 gcj.  Ease of parsing, however, is a different story (for instance, actual 
 dependencies are way easier to pull out of a binary than the source code, 
 whereas stack-local variable names are easiest in source).

 Where you care about a whole web application rather than a pure-Java 
 module, you have to concern yourself with JSP and all the other MVC 
 technologies. Placing aside the topic of XML-based configuration files, 
 you'll want to know what (container) your JSPs were compiled to target. In 
 this case, source code is different than binary. Similar factors sneak 
 themselves in across the Java platform.

 Then you've got the world of Aspect Oriented programming. Spring and a 
 broader class of packages that use AspectJ to weave code into your 
 application will dramatically change the face of your binary. To get the same 
 resolution out of your source code, you must in essence 'apply' those point 
 cuts yourself... Getting binary-quality resolution from source code  
 therefore means predicting what transforms will occur at what point-cut 
 locations. I doubt highly any source-based approach will get this thoroughly 
 correct.

 Finally, from the perspective of dynamic analysis, one must consider the 
 post-compiler transforms that occur. Java involves both JIT and Hotspot 
 (using two hotspot compilers: client and server, each of which conducting 
 different transforms), which neither binary nor source-code-based static 
 analysis is likely to correctly predict or account for. The binary image 
 that runs is simply not what is fed to ClassLoader.defineClass() as a 
 bytestream.

 ...and (actually) finally, one of my favorite code-review techniques is to 
 ask for both a .war/.ear/.jar file AND the source code. This almost invariably 
 gets a double-take, but it's worth the trouble. How many times do you think 
 the web.xml files match between the two? What exposure might you report if 
 they were identical? ... What might you test for if they're dramatically different?
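John's .war-versus-source trick can be mechanized. A minimal sketch (the XML content and helper names are hypothetical), using Python's zipfile to pull WEB-INF/web.xml out of a deployable archive and compare it against the copy in the source tree:

```python
import hashlib
import io
import zipfile

def webxml_digest_from_war(war_bytes: bytes) -> str:
    """Digest the web.xml actually packaged in the deployable .war."""
    with zipfile.ZipFile(io.BytesIO(war_bytes)) as war:
        return hashlib.sha256(war.read("WEB-INF/web.xml")).hexdigest()

# Hypothetical demo data: the deployed descriptor drifted from source.
source_xml = b"<web-app><session-config><session-timeout>30</session-timeout></session-config></web-app>"
deployed_xml = b"<web-app><session-config><session-timeout>480</session-timeout></session-config></web-app>"

# Build an in-memory .war containing the deployed copy.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as war:
    war.writestr("WEB-INF/web.xml", deployed_xml)

source_digest = hashlib.sha256(source_xml).hexdigest()
match = webxml_digest_from_war(buf.getvalue()) == source_digest
print("web.xml matches source:", match)  # prints: web.xml matches source: False
```

A mismatch, as here, is exactly the double-take-worthy finding described above: the configuration you reviewed is not the configuration that ships.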

 Ah... Good times,
 
 John Steven
 Senior Director; Advanced Technology Consulting
 Direct: (703) 404-5726 Cell: (703) 727-4034
 Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

 Blog: http://www.cigital.com/justiceleague
 Papers: http://www.cigital.com/papers/jsteven

 http://www.cigital.com
 Software Confidence. Achieved.


 On 7/28/09 4:36 PM, ljknews ljkn...@mac.com wrote:

 At 8:39 AM -1000 7/28/09, Jim Manico wrote:

 A quick note: in the Java world (obfuscation aside), the source and
 binary are really the same thing. The fact that Fortify analyzes
 source and Veracode analyzes class files is a fairly minor detail.

 It seems to me that would only be true for those using a
 Java bytecode engine, not those using a Java compiler that
 creates machine code.



Re: [SC-L] IBM Acquires Ounce Labs, Inc.

2009-07-28 Thread Arian J. Evans
Right now, officially, I think that is about it. IBM, Veracode, and
AoD (in Germany) all claim to have this.

As Mattyson mentioned, Veracode only does static binary analysis (no
source analysis). They offer dynamic scanning, but I believe it is
using NTO Spider, IIRC, which is a simplified scanner that targets
unskilled users, last I saw it.

At one point I believe Veracode was in discussions with SPI to use WI,
but since the Veracoders haunt this list I'll let them clarify what
they use if they want.

So IBM: soon.

Veracode: sort-of.

AoD: on paper

And more to come in short order no doubt. I think we all knew this was
coming sooner or later. Just a matter of when.

The big guys have a lot of bucks to throw at this problem if they want
to, and pull off some really nice integrations. Be interesting to see
what they do, and how useful the integrations really are to
organizations.

-- 
Arian Evans





On Tue, Jul 28, 2009 at 9:29 AM, Matt Fisherm...@piscis-security.com wrote:
 Pretty much. HP/SPI has integrations as well, but I don't recall DevInspect 
 ever being a big hit.  Veracode does both, as well as static binary, but as 
 a SaaS model. Watchfire had a RAD integration as well, IIRC, but it clearly 
 must not have had the share Ounce does.

 -Original Message-
 From: Prasad Shenoy prasad.she...@gmail.com
 Sent: July 28, 2009 12:22 PM
 To: Kenneth Van Wyk k...@krvw.com
 Cc: Secure Coding SC-L@securecoding.org
 Subject: Re: [SC-L] IBM Acquires Ounce Labs, Inc.


 Wow indeed. Does that make IBM the only vendor to offer both static
 and dynamic software security testing/analysis capabilities?

 Thanks & Regards,
 Prasad N. Shenoy

 On Tue, Jul 28, 2009 at 10:19 AM, Kenneth Van Wykk...@krvw.com wrote:
 Wow, big acquisition news in the static code analysis space announced today:

 http://news.prnewswire.com/DisplayReleaseContent.aspx?ACCT=104STORY=/www/story/07-28-2009/0005067166EDATE=


 Cheers,

 Ken

 -
 Kenneth R. van Wyk
 KRvW Associates, LLC
 http://www.KRvW.com

 (This email is digitally signed with a free x.509 certificate from CAcert.
 If you're unable to verify the signature, try getting their root CA
 certificate at http://www.cacert.org -- for free.)








Re: [SC-L] Questions asked on job interview for application security/penetration testing job

2009-03-22 Thread Arian J. Evans
On Sat, Mar 21, 2009 at 2:43 PM, Matt Parsons mparsons1...@gmail.com wrote:

 I was asked the following questions in a phone interview for a job and wondered
 what the proper answers were.   I was told their answers after the
 interview. I was also told that the answers to these questions were one or
 two words.  At the beginning of next week I will post what they told me
 were the proper answers.   Any references would be greatly appreciated.

Looks simple enough. Were there tricks to it? Some companies play
games with these types of interviews. (Google)

I empathize with brevity. Usually when people ramble too long in
interviews they don't know what they are talking about (and are extra
nervous because of this).

So what are the word answers?


 1.  What are the security functions of SSL?

Transport layer security. Asymmetric public key, symmetric private
key, blah blah
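Those functions, confidentiality via a negotiated symmetric key plus server authentication via asymmetric certificates, map directly onto what a default TLS client context enforces. A quick sketch with Python's stdlib ssl module:

```python
import ssl

# A default client context bundles SSL/TLS's security functions:
# server authentication (certificate verification) plus an encrypted,
# integrity-protected channel once the handshake completes.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: peer certificate required
print(ctx.check_hostname)                    # True: endpoint identity verified
```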


 2.  What is a 0 by 90 bytes error.

Error? 0x90 is NOP. A bunch of them make a good sled.
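To make the one-word answer concrete: 0x90 is the single-byte x86 NOP opcode, and a sled is a run of them placed before a payload so that an imprecise jump anywhere into the run "slides" execution forward. A harmless illustration (the payload bytes are placeholders, not real shellcode):

```python
NOP = b"\x90"              # x86 one-byte NOP opcode
payload = b"\xcc" * 4      # placeholder bytes standing in for a payload

sled = NOP * 64 + payload  # landing anywhere in the first 64 bytes
                           # slides execution into the payload

# Any landing offset inside the sled sees only NOPs until the payload:
landing = 17
assert set(sled[landing:64]) == {0x90}
print(len(sled))  # 68
```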


 3.  What is a digital signature, Not what it is?

Authentication


 4.  What is the problem of having a predictable sequence of bits in TCP/IP?

Session Prediction (leads to etc. etc.)


 5.  What is heap memory?

Pooled memory dynamically allocated, no fixed-life


 6.  What is a system call?

Software call into an underlying OS function (e.g., FileOpen())
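Python's os module makes the distinction visible: os.open, os.read, and os.write are thin wrappers over the corresponding OS system calls (open(2), read(2), write(2)), in contrast to the buffered, high-level built-in open(). A quick sketch:

```python
import os
import tempfile

# mkstemp returns a raw OS-level file descriptor, not a Python file object.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")   # write(2)
os.close(fd)             # close(2)

fd = os.open(path, os.O_RDONLY)  # open(2)
data = os.read(fd, 5)            # read(2)
os.close(fd)
os.remove(path)

print(data)  # b'hello'
```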


 7.  what is two factor authentication?

Two of the three factors: something you have, something you know, or something you are



-- 
Arian Evans

Let me issue and control a nation's money, and I care not who writes its laws

--Mayer Amchel Rothschild



Re: [SC-L] SDL / Secure Coding and impact on CWE / Top 25

2009-01-29 Thread Arian J. Evans
I think that you are spot on, and people are sooner than
later going to be demanding that, as a by-product of our
shrinking economic reality.

Take this example (not to stir up a semantic pissing match):

Insufficient Input Validation

I get it. I understand the importance of it. But it is not
clear to a business owner or C-level executive what that means
to execute on. It is fairly ambiguous, especially in the
Web 2.0 world, what that really means. And often
you find, in the slippery slope between design and
pattern issues and implementation-level defects,
that your fundamental data/function boundary problem
is *not* best solved/enforced via input validation.

In the case where the ideal solution to enforce a
data/function boundary is parameterized SQL, or
encoded output (or a separate data/control channel),
IV simply functions as an attack-surface-reduction
mechanism, and given the costs of defining and
enforcing validation for data that may be loosely
typed by requirement, its return can be negligible at best.

Obviously this is highly contextual and YMMV on
that slippery slope between changing a fundamental
design to fixing singular implementation errors.
Architecture will play a huge key in figuring cost.

But if I have to pick one in a weak data access scenario:

1) Stronger IV/data typing (before queries are built)

2) Parameterized SQL or abstracted data access layer

3) Principle of least privilege definition and strict CRUD enforcement
(by objects accessing data)
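The gap between option 2 and plain string-built queries is easy to demonstrate. A minimal sketch with Python's sqlite3 (the table and payload are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

evil = "x' OR '1'='1"  # classic syntax-injection payload

# String concatenation: attacker data rewrites the query's syntax.
rows_concat = conn.execute(
    "SELECT role FROM users WHERE name = '" + evil + "'"
).fetchall()

# Parameterized query: the driver binds the value; data stays data.
rows_bound = conn.execute(
    "SELECT role FROM users WHERE name = ?", (evil,)
).fetchall()

print(rows_concat)  # [('admin',)] -- the injection matched every row
print(rows_bound)   # []           -- no user is literally named the payload
```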


We would all like to know which has the most return.

I think that will be tough though. I've found cases
where some simple CRUD tweaking mitigated all
negative impact to successful syntax attacks.

And I've found cases where it did not help at all,
or due to design, simply was not possible to make
meaningful priv separations.

How's that for a verbose Yes?

Good questions.

ciao

-- 
Arian Evans

I ask, sir, what is the militia? It is the whole
people. To disarm the people is the best and
most effectual way to enslave them.
-- Patrick Henry



On Wed, Jan 28, 2009 at 3:20 PM, Steven M. Christey
co...@linus.mitre.org wrote:

 In the past year or so, I've been of a growing mindset that one of the
 hidden powers of CWE and other weakness/bug/vulnerability/attack
 taxonomies would be in evaluating secure coding practices: if you do X and
 Y, then what does that actually buy you, in terms of which vulnerabilities
 are fixed or mitigated?  We capture some of that in CWE with CAPEC
 mappings for attacks.

 We've also mapped to the CERT C Secure Coding standard, as reflected in
 this CWE view: http://cwe.mitre.org/data/graphs/734.html (for the
 complete/detailed listing, click the Slice button on the upper right and
 sift through the Taxonomy Mappings).  Or, check out the coverage graphs
 that show where the coding standard fits within the two main CWE
 hierarchical views: http://cwe.mitre.org/data/pdfs.html

 Now Microsoft has released a paper that shows how their SDL practices
 address the Top 25, like they did when the OWASP Top Ten came out.  To me,
 this seems like a productive practice and a potential boon to consumers,
 *if* other vendors adopt similar practices.  Are there ways that the
 software security community can further encourage this type of thing from
 vendors?  Should we?

 Gary, do your worst ;-)

 http://blogs.msdn.com/sdl/archive/2009/01/27/sdl-and-the-cwe-sans-top-25.aspx

 - Steve


Re: [SC-L] SANS/CWE Top 25: The New Standard for Webappsec

2009-01-19 Thread Arian J. Evans
On Mon, Jan 19, 2009 at 9:45 AM, Stephen Craig Evans
stephencraig.ev...@gmail.com wrote:

 Hi Arian,

  SANS has spoken and I think that is a pretty clear indication what is
 going on)

 Have you been watching Wizard of Oz re-reruns again? This sentence sounds
 too much like "The Mighty Oz has spoken" :-)

I am from Kansas, Stephen. How did you know?

On a serious note:

I have tremendous respect for the SANS organizations'
work and the value they provide to the infosec community.

I believe they are one of the best barometers of what
is going on out in day-to-day security-land. In addition
they have significant clout with information security
professionals ranging from technical  implementation
engineers, to tactical security management and auditors,
to strategic level CISOs and policy compliance folks.

They have a lot more clout across the board with all
of those folks for infosec in general than the combined
communities of OWASP, WASC, Mitre, and the denizens
of the SCL list. /strong_suspicion: educated_guess

Translation: we should all watch closely and take cues
from how SANS uses our software security publication
output, be it Top N lists or standards or whatever.

SANS and their many tentacles are market driven
both with regards to private sector and government.
They will react to needs and provide them, and have
a clear idea what folks want.

In this case what is wanted is CLEARLY a minimum
standard of due care and SANS will use such a list
accordingly, much as previous SANS Top N lists.

What this means to the rest of us I pretty much
covered in my last post.

I have gotten a deluge of email in response to my
posts to both SCL and WASC about SANS/CWE
Top 25 from folks at organizations that have already
had their bosses ask -- or even implement -- the
CWE Top 25 as a standard of some type in
their organization.

Numerous customers I interact with are already
asking me to cross-map the CWE/SANS Top 25
with existing web application security lists. (OWASP
Top 10, WASC Threat Classification, etc.)

My previous email lists the type of uses I am
already seeing.

First, the list should be webified. That is probably
the #1 interest in consumption of that data. There
are a finite number of programmers working at
Microsoft on their network stack in C++, and they
are already way beyond this level. We're not putting
out information for them.

The majority of crappy software today is being
built as web systems or embedded software. Two
very different problem domains in terms of threat
landscape and attack surface (though overlap
in basic data handling principles).

Then, again, you need three lists:

+ stuff to test for
+ patterns and practices to build secure
+ how to address software security in an enterprise

The current Top 25 is kind of a bastard mix of
all three of those, and solves none of them well.

Sorry to stir people up, but this CWE list just
created a headache and more work for me that
I do not see improves upon anything I am already
working on or providing.

(Besides global attention -- proving again my
assertion that folks are hungry for more)

Thanks all,

-- 
-- 
Arian Evans

I ask, sir, what is the militia? It is the
whole people. To disarm the people is
the best and most effectual way to
enslave them.-- Patrick Henry


[SC-L] SANS/CWE Top 25: The New Standard for Webappsec

2009-01-17 Thread Arian J. Evans
Hello all. Xposting to SCL and WASC:

Following-up to my commentary on the
WASC list about the SANS/CWE Top 25

I have repeatedly confirmed that the SANS/CWE
Top 25 is being actively used, and growing in
use, as a Standard.

I understand the spirit of intent and that the
makers are not accountable for how it is used,
but we need to be realistic about how it is
being implemented in the real world *now*.

It is beginning to be used as a standard for:
* what security defects to test software for
* how to measure the security quality of software
* how to build secure software
* what to teach developers about coding securely


I have confirmed this with:
* peers
* corporations
* state governments
* software security solutions vendors
* customers

We are already seeing RFPs for products
and services, management and auditor
created internal standards, and requests
for training and reporting using the SANS/
CWE Top 25 as a standard.

There are three goals of this post:

1) to make very clear to all involved that
what is being built with the Top 25 list is
a minimum standard of due care.

2) To suggest that this is (most likely) how
it is primarily going to be used.

(You brought the SANS/CIS club to the dance here...)

3) Suggest that future versions be re-focused
on building actual minimum standards of
due care for the demonstrated needs.

The great thing that is coming out of this Top 25
experiment is to identify that awareness and
hunger-level for material like this is very high.

This is also showing us what people really want
right now:

People want a minimum standard of due care.

It is obvious people want bite-sized digestible
snippets to use as guidelines for making and
measuring the security quality of our software.

That is evidenced by how rapidly people have
latched onto this new list. (one week + !)

The SANS and Mitre brand have huge stock in
the mainstream, non-appsec security community,
much larger than OWASP and WASC, as is
evidenced again by the attention this is getting
throughout the infosec and audit communities.

And summing up, directly from Alan Paller:

http://searchsecurity.techtarget.com/news/article/0,289142,sid14_gci1344962,00.html


Conclusions:

We need a minimum standard of due care Top N list.


We really need THREE minimum standards of due care:

1) Top N issues/defects to test your software for
2) Top N principles to build secure software
3) Top N strategies to improve software security in your enterprise

Webappsec folks should make webappsec
versions, or else we will all wind up using
the same ones for *everything*.

We might want to divide/share efforts between
organizations and cross-reference each other
for maximum (positive) effect. We could likely
leverage each others' work and try to unify
our language across appsec communities.

(Ideologies and pet naming systems are where
these efforts always break down in our group.)


I am avoiding the debate over the inherent
problems with Top N and bug parade approaches
in general.  People are letting us know what they
want and I think we should solve that need.

...or they will take whatever we give them for
other purposes and use it to solve that need,
partially, improperly, ineffectively.

I will quit my bitching about the Top 25 and
focus on productively moving forward, now that
it's clear my concerns are too late and it's
already moving full steam ahead as a standard.

People do not know what to do. They have
a serious problem that is starting to cause
them to lose real sleep and real money, and
they are looking to us for suggestions and
guidance as to what to do.

I concede that the Top 25 in this regard is
better than nothing, but it's not really what
people want or need right now (IMHO).

(Note: I have not asked parties involved
if I can quote them or quote facts of this
being used as a standard. The volume
of emails I am receiving with examples
of this makes me think it is either a fad
or self-evident, and you will all see plenty
of examples very soon if you
have not already.

SANS has spoken and I think that is
a pretty clear indication what is going on)

$0.02 USD,

-- 
-- 
Arian Evans

Anti-Gun/UN people: you should weep for
Mumbai. Your actions leave defenseless dead.

Among the many misdeeds of the British
rule in India, history will look upon the Act
depriving a whole nation of arms, as the
blackest. -- Mahatma Gandhi


Re: [SC-L] InternetNews Realtime IT News - Merchants Cope With PCI Compliance

2008-07-01 Thread Arian J. Evans
Gunnar -- agreed. And for all the fake security in the
name of PCI going on right now out there -- let's also
keep in mind that it is completely valid and legitimate
to attempt to operationalize software security.

We scoff because to date it hasn't been done well (at all).
That is just as much a technology as people problem.

I know WAFS can be used fairly effectively. The recent SQL
Injection bots, and folks who survived them through attack-
vector filtering, are good examples of increased survivability
through use of this technology.

I suspect there's a backlash coming to the magic-pizza-box
WAF vendors. The magic elf inside auto protection just
does not work in most enterprise scenarios.

Tangential to PCI -- the self-proclaimed top vendor in the
PCI WAF space with super-auto-learning is losing several
top accounts I've confirmed, from VARs and customers directly.
Including customers on their case studies page.

The customers ditching the auto-learning WAF are
still using a WAF. They are just replacing it with a
different kind of WAF.

The two approaches I see being investigated as part
of a WAF 2.0 strategy are:

(a) virtual patching, e.g. only protecting things known to be weak, and

(b) Fortify's code-shim WAF approach.

Disclaimer: I work on a solution of type (a).

Agreed on the people problem. There's a technology
problem here too, though. And it's not a small one.

Many of us throw out the baby with the bathwater due
to the technology problem and the insane vendor
marketing around it we've been dealing with for years.

When many of our technology solutions still don't do
what they say they have been able to do for 4 or 5
years, maybe it's time to start blaming some new people.

-- 
-- 
Arian J. Evans.
Software. Security. Stuff.



On Mon, Jun 30, 2008 at 7:17 AM, Gunnar Peterson [EMAIL PROTECTED] wrote:
 for the vast majority of the profession, slamming the magic pizza box in a 
 rack is preferable to talking to developers. In many cases the biggest 
 barrier to getting better security in companies is the so-called information 
 security group. It has very little to do with technology; it's a people problem.

 -gp

 Kenneth Van Wyk wrote:
 Happy PCI-DSS 6.6 day, everyone.  (Wow, that's a sentence you don't hear
 often.)

 http://www.internetnews.com/ec-news/article.php/3755916

 In talking with my customers over the past several months, I always find
 it interesting that the vast majority would sooner have root canal than
 submit their source code to anyone for external review.  I'm betting PCI
 6.6 has been a boon for the web application firewall (WAF) world.


 Cheers,

 Ken

 -
 Kenneth R. van Wyk
 SC-L Moderator
 KRvW Associates, LLC
 http://www.KRvW.com




 



Re: [SC-L] Lateral SQL injection paper

2008-04-29 Thread Arian J. Evans
So I'd like to pull this back to a few salient points. Weirdly,
some folks seem quick to dismiss the paper with a
didactic shot of "folks shouldn't code that way anyway",
which has nothing to do with the subject.

1. I think everyone on SC-L gets the idea of strong
patterns and implementations, and why parameterized
SQL is a good thing, and why cached queries are also
a good thing (for performance, at least, and security if
by doing so you enforce avoidance of EXEC())

2. David's paper is interesting, because out in the real
world people do not, and sometimes cannot, follow
ideal patterns, command patterns, and or implementations
that are safe. (e.g. delegation of privilege on Windows
accessing the DB for security inheritance vs. the negative
impact to thread pooling and process safety -- it is
a real tradeoff, and *never* made on the side of security)

David's paper is interesting because out in the real
world people still follow many borderline unsafe practices
and understanding new attack vectors is essential to
assessing risk, and understanding whether refactoring,
or hofixing, vs. logging, filtering, or *ignoring* the code,
is the right business choice to make.

David's example is more CVE instance than CWE class.

--

Steven, I like the grouping of your two main abstractions
below; for purpose of discussion  education I like to  put
these together a little differently into Semantic and Syntax
software security-defect buckets. I'm curious what your
thoughts are (and take this offline if the response is too tangential)


1. Semantic -- I place message structure, delimiting,
and all entailments of semantic conversation, including
implications of use-case and business rules here, where
the latter relate to enforcing specific semantic user/caller-
dialogues with the application.

I place callee requirement to enforce workflow, order,
message structure, state and sequence, and *privilege* here.

2. Syntax -- at heart we have a data/function boundary
problem, right? And most modern implementation level
languages do not give us constructs to address/enforce
this, so all our kludged workarounds, from stack canaries
to crappy \ escaping in SQL to attempts to use HTML
named entities to encode output, fall into this grouping.

I place in callee requirements everything to do with
message encoding, canonicalization, buffer and
case, e.g. all syntax issues, into this grouping.

Now, arguably you could call a buffer or heap overflow
semantic, if you argue it's privilege related, but I
would say that is a result of language defects (or
realities) and it still starts syntactically.

Where would you put the recent URI-handler issues
in this structure?

Why did you specify privilege burden on the caller?

I tend to leave out/ignore the caller responsibilities
when I am thinking of software. This could be a
bias of predominantly web-centric (and db client/server
where I don't control the client) programming and
design over the years.

While it makes sense to enforce some syntax
structure upon the caller, in general I tend to
put all semantic responsibilities upon the callee,
and even assume the callee should enforce
some notion of syntax requirements upon
the caller, and feed said requirements back to the caller.

-- 
-- 
Arian J. Evans.

I spend most of my money on motorcycles, mistresses, and martinis. The rest
of it I squander.



On Tue, Apr 29, 2008 at 9:10 AM, Steven M. Christey [EMAIL PROTECTED]
wrote:


 On Tue, 29 Apr 2008, Joe Teff wrote:

   If I use Parameterized queries w/ binding of all variables, I'm 100%
   immune to SQL Injection.
 
  Sure. You've protected one app and transferred risk to any other
  process/app that uses the data. If they use that data to create dynamic
  sql, then what?

 Let's call these "using apps" for clarity of the rest of this post.

 I think it's the fault of the using apps for not validating their own
 data.

 Here's a pathological and hopefully humorous example.

 Suppose you want to protect those using apps against all forms of
 attack.

 How can you protect every using app against SQL injection, XSS, *and* OS
 command injection?  Protecting against XSS (say, by setting > to &gt;
 and other things) suddenly creates an OS command injection scenario
 because > and ; typically have special meaning in Unix system() calls.
 Quoting against SQL injection with \' will probably fool some XSS protection
 mechanisms and/or insert quotes after they'd already been stripped.
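The conflict described here can be shown mechanically: the same payload, correctly encoded for one downstream context, remains (or becomes) dangerous in another. A small stdlib demonstration with an arbitrary made-up payload:

```python
import html
import shlex

payload = "O'Brien > log; rm -rf /"

# Encoded for HTML output: the > is neutralized for a browser...
html_safe = html.escape(payload, quote=True)
# ...but ; survives untouched and the quote is now an entity -- the
# result is still hostile data for a shell or a SQL string literal.
assert ";" in html_safe and "&#x27;" in html_safe

# Encoded for a shell: shlex.quote wraps the word in single quotes...
shell_safe = shlex.quote(payload)
# ...which is exactly the delimiter a SQL string literal cares about.
assert shell_safe.startswith("'")
```

Neither transformation makes the data safe for the *other* context, which is the point: each consuming app must validate/encode for its own sinks.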

 As a result, the only safe data would be alphanumeric without any spaces -
 after all, you want to protect your using apps against whitespace,
 because that's what's used to introduce new arguments.

 But wait - buffer overflows happen all the time with long alphanumeric
 strings, and Metasploit is chock full of alpha-only shellcode, so
 arbitrary code execution is still a major risk.  So we'll have to trim the
 alphanumeric strings to... hmmm... one character long.

 But, a one-character string will probably be too short for some using
 apps

Re: [SC-L] Lateral SQL injection paper

2008-04-28 Thread Arian J. Evans
David's papers are always interesting, but I think
the most interesting thing is that we are starting
to see advanced SQL injection like his recent
work on cursor attacks/snarfing being used in the
wild in mass-SQL injection exploits.

Attackers are using multiple layers of encoding both for
reliability of delivery and for obfuscation
(for all of you that rolled your eyes every time
I've talked about this for the last five years :) and
as a result are bypassing interface input validation
and blacklists.

The attackers are using common stuff for filter
evasion like using char(127), then hex URI-
escaping or hex encoding that (with \hex,
HTML entity, decimal, whatever), and then
sometimes URI encoding every character
of the resultant string.

Internal parsers canonicalize down to the
SQL-interpretable string (e.g. char(127))
and the SQL parser obviously makes a
nice ' out of that.
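The layered-encoding evasion described above can be modeled with double URI-encoding (one of many possible layerings; the exact combinations seen in the wild vary):

```python
from urllib.parse import quote, unquote

# Attacker's primitive: a single quote.
inner = quote("'", safe="")   # "%27"   -- first encoding layer
wire = quote(inner, safe="")  # "%2527" -- second layer; this is what
                              # the interface blacklist actually sees

# A naive blacklist at the edge finds neither a quote nor %27:
assert "'" not in wire
assert "%27" not in wire

# Each internal parser strips one layer; the innermost layer finally
# canonicalizes down to the character the blacklist never saw.
assert unquote(unquote(wire)) == "'"
```

The defense implication: validate *after* canonicalizing to the form the final interpreter will see, not at the outer interface.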

There are a lot more clever things going
on with the exploits, some of which could
be restricted by simple privilege.

Anyway -- the black hat crowd is paying as
much attention to the latest exploit techniques
as most of us are, if not more. They are using
them in the wild, right this second, to make money.

Interesting work by David, for sure, and
great ammo if we have to beat the strong
data typing drum in our software.

-- 
-- 
Arian J. Evans, software security stuff.

I spend most of my money on motorcycles, mistresses, and martinis. The
rest of it I squander.



On Mon, Apr 28, 2008 at 12:13 PM, Kenneth Van Wyk [EMAIL PROTECTED] wrote:
 Greetings SC-Lers,

  Things have been pretty quiet here on the SC-L list...

  I hope everyone saw David Litchfield's recent announcement of a new
 category of SQL attacks.  (Full paper available at
 http://www.databasesecurity.com/dbsec/lateral-sql-injection.pdf)

  He refers to this new category as "lateral SQL injection" attacks.  It's
 very different from conventional SQL injection attacks, as well as quite a
 bit more limited.  In the paper, he writes:

  Now, whether this becomes exploitable in the normal sense, I doubt
 it... but in very
  specific and limited scenarios there may be scope for abuse, for example in
 cursor
  snarfing attacks -
 http://www.databasesecurity.com/dbsec/cursor-snarfing.pdf..

  In conclusion, even those functions and procedures that don't take user
 input can be
  exploited if SYSDATE is used. The lesson here is always, always validate
 and don't let
  this type of vulnerability get into your code. The second lesson is that no
 longer should
  DATE or NUMBER data types be considered as safe and not useful as injection
 vectors:
  as this paper has proved, they are. 


  It's definitely an interesting read, and anyone doing SQL coding should
 take a close look, IMHO.  It's particularly interesting to see how he alters
 the DATE and NUMBER data types so that they can hold SQL injection data.
 Yet another demonstration of the importance of doing good input validation
 -- preferably positive validation.  As long as you're doing input
 validation, I'd think there's probably no need to go back through your code and
 audit it for lateral SQL injection vectors.
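Litchfield's mechanism can be caricatured outside Oracle. The idea: a DATE is considered "safe," so its text is concatenated into dynamic SQL, but its rendering is governed by a session-controlled format whose double-quoted literal portions pass through verbatim. The function below is a deliberately toy stand-in, not real Oracle TO_CHAR/NLS_DATE_FORMAT behavior:

```python
# Toy stand-in for TO_CHAR(SYSDATE, fmt): double-quoted sections of an
# Oracle-style date format are emitted as literal text. Real TO_CHAR
# would also expand elements like YYYY; omitted for simplicity.
def to_char_sysdate(fmt: str) -> str:
    return fmt.replace('"', "")

# Attacker sets the session date format to smuggle SQL through a "date"
# (modeling ALTER SESSION SET NLS_DATE_FORMAT with a quoted literal).
session_fmt = '"\' OR 1=1--"YYYY'

# The vulnerable pattern: DATE output is "known safe," so it is
# concatenated into dynamic SQL instead of bound.
stmt = "SELECT * FROM audit WHERE day = '" + to_char_sysdate(session_fmt) + "'"
print(stmt)  # the OR 1=1-- literal has escaped the quoting
```

This is only a model of the class; the paper itself documents the real Oracle specifics.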

  Anyone else have a take on this new attack method?  (Note that I don't
 normally encourage discussions of specific product vulnerabilities here, but
 most certainly new categories of attacks--and their impacts on secure coding
 practices--are quite welcome.)


  Cheers,

  Ken

  -
  Kenneth R. van Wyk
  SC-L Moderator

  KRvW Associates, LLC
  http://www.KRvW.com

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Silver Bullet turns 2: Mary Ann Davidson

2008-04-04 Thread Arian J. Evans
Mary -- Thank you for your reply and clarification.

I am 100% on board with you about folks inventing
taxonomies and then telling business owners and
developers what artifacts they need to look for,
measure, etc. without any real cost or business
justification with regard to your costs vs. return
to track/measure in the first place, let alone *why*
to fix them.

I lumped a few things together which I shouldn't have...

Defect decision making: Most organizations involved
in the care and feeding of business software, whether or
not they produced it, are still stuck at the instance level of
"do I fix this vulnerability/asset now or later?"

You are at an entirely different level as a large producer.

As one of the largest ISVs -- you likely have very different
needs and perspectives. e.g.- you might be able to bear
the cost of deeper software defect analysis than most
software consumer organizations, due to the huge cost
of regression testing and/or the support costs of post-hotfix
deployments if they break things.

The above is just a guess on my part, but we all know that
hotfixing running production database servers is not the
same game as implementing output encoding on a web
application search field. /cost/effort/risk

Anyway, long and short, like Andrew and others -- those
of us in the pragmatic "good enough" & "daily increments"
security camps would love for you to share more of your
experiences with this, Oracle's SDL journey, etc. and
add that to our growing pool of what we know about this.

A lot of what I mentioned we need to gather/answer is
really more for the consumers of your software, writing
their custom code on top of it, and then trying to operate
in a reasonably safe manner while making money.

As an ISV you have a slightly different operating model
and bottom line, and you probably are afforded less
ability to use temporary band-aids or mitigation steps
versus flat-out fixing your software. Not everyone has
to fix all their software, or make it self-defending.

Which is ironic considering I used to present on the
notion of "self-defending web applications," and help
people implement SDLs. But I've become less of a
purist now that I'm trying to help people secure
dozens to hundreds of applications at once, with
limited time and budget.

It's hard not to feel like a charlatan when you can't
give the business folks hard metrics on defect
implications, failure costs, let alone just the cost
of measurement vs. potential return.

Thank you for doing Gary's interview, and the
stimulating thoughts.

Have a good weekend all. Come see me at
RSA if any of you SC-L'ers are around. I'll be
putting people to sleep with a talk on encoding,
transcoding, and canonicalization issues
in our software (and WAFs :).
(At RSA. Lol.)

Ciao

-ae




On Fri, Apr 4, 2008 at 2:50 PM, mary.ann davidson
[EMAIL PROTECTED] wrote:

  Hi Arian

  Thanks for your comments.

  Just to clarify, I was not trying to look at this at the micro level of
 should we fix buffer overflows or fix SQL injections? We (collectively)
 now have pretty good tools that can find exploitable defects in software and
 the answer is, if it is exploitable you fix it, and if it is just poor
 coding practice (but not exploitable) you still probably should fix it,
 albeit maybe not with same urgency as exploitable defect and you may only
 fix it going forward.

  My issue is people who invent 50,000 ideologically pure development
 practices, or artifacts or metrics that someone might want you to produce
 (often, someone who is an academic or a think tank person), and never look
 at "Ok, what does it cost me to capture that information? Will being able to
 measure X or create a metric help me actually manage better?  If I can't
 have a theologically perfect development process (and who does?), then what
 are the best things to do to actually improve?" The perfect is really the
 enemy of the good or the enemy of the better.

  I like a lot of your suggestions after "We really need to know" I do
 realize that as you close one attack vector, persistent enemies will try
 something new. But one of the reasons I do feel strongly about "get the
 dreck out of the code base" is that, all things being equal, forcing
 attackers to work harder is a good thing. Also, reducing customers' cost of
 security operations (through higher security-worthy, more resilient code)
 is a good thing because resources are always constrained, and the resources
 people spend on patching, and/or random third-party add-on security
 appliances and software takes scarce resources that might be put to better
 uses.

  If the Army has tank crews of 12, and 10 of them are busy fixing the tank
 treads because they keep slipping off, they aren't going to be too ready to
 fight an actual battle.

  Regards -

  Mary Ann



  Arian J. Evans wrote:

  I'll second this Gary. You've done nice work here.

 I think Mary Ann's comments are some of the most
 interesting concerning what our industry needs

Re: [SC-L] Silver Bullet turns 2: Mary Ann Davidson

2008-04-04 Thread Arian J. Evans
I'll second this Gary. You've done nice work here.

I think Mary Ann's comments are some of the most
interesting concerning what our industry needs to
focus on in the near future. (and I'd love to see you
focus on this with your series)

Her comments reminded me of a discussion on this
list with Wysopal a year or so ago.

Wysopal described defect detection in manufacturing,
and gave software security analogues. This resonates
with me as I'm fond of using analogues to airplane
manufacturing, construction, and testing to explain
what a software SDL should look like, and where/why
whitebox, blackbox, and static analysis could and
should fit in a mature SDL model. And how you
evaluate/prioritize assets and costs.

Important to this discussion is the vastly different
degrees of defect detection we perform on human
transport planes versus, say, model airplanes,
where the threat profile is (obviously) reduced.

Yet in software, security folks at many organizations
have a very hard time deciding which planes have
more valuable payloads. What defect to address?

The buffer overflow on the Tandems (that no one
knows how to write shellcode for)? The SQL injection
on the public content DB? The non-persistent XSS
buried deep in the application? HTTP control-character
injection in the caching proxy? Prioritization difficulty...

Mary Ann is asking for metrics, including cost, and
some measure of threat profiles from which to
calculate or estimate the risk presented by defects.

Ultimately I think we all want some sort of priority
weighting by which to make better decisions about
which defects are important, what our executive
next-actions are, what to filter, etc.

But this can't come from software analysis. This
must come from organizational measurement data.

The problem is, nobody in our industry has good
data on what those metrics are. Many of us have
little slices, but there is no universal repository
for sharing all of that information (which is needed).

The software-security marketing industry has relied
as a crutch on software-defect cost-analysis studies
done by large accounting consulting firms that have
a vested interest in puffing up the discovered costs,
so that they can sell multi-million-dollar, multi-year,
software defect-reduction projects to their clients
that are reading those reports.

I don't think those reports have much to do with
our industry. (We're all aware of the "we reduced
1 bug per 10 lines of code saving the customer
millions" games those consultancies play, right?)

We really need to know:

Threat:
+ What are the attackers going after?
+ What is being attacked and how often?
+ What is being lost?


Attack Surface:
+ What attacks are successful and how often?
+ What are the most common attacks?
+ What are the most common/successful targets?

Then we need to map this back into software
design patterns and implementation practices,
and generate some metrics around costs to:

+ hotfix & support
+ release-fix and regression test
+ redesign and re-implement

In the manufacturing world, the people who
analyze this data are the actuaries or the
government regulatory agencies. We have
neither insurance nor regulation to drive this.

We also need to see if it really costs more
to fix code after-the-fact, versus avoidance
up front. I suspect this up-front avoidance is
still cheaper with unmanaged code products,
and maybe with all design issues.

In the web software world, though, I think this
is not always the case.

(I suspect in the web world, especially with
modern frameworks, it is just as cheap
and easy to refactor/hotfix post-production
software as it is to catch defects before final
implementation or shipping code)

Anyway -- there's a lot that has to happen
to provide Mary with the data she wants
(and indeed any smart CSO in an ISV
should be asking for).

In fact -- it's amazing we've improved so
far, so fast, without even knowing what
the tensile strength of our materials is,
or having a concrete notion of what the
cost of failure is in a final product.

So where will this come from?

Insurance?

Regulation?

Industry self-policing project?

Is someone here working on this today?

Cheers. Ciao

-- 
-- 
Arian Evans, software security stuff



On Thu, Apr 3, 2008 at 10:19 PM, Stephen Craig Evans
[EMAIL PROTECTED] wrote:

 Gary,

 Great interview. You've had some powerhouse interviews recently, for example
 with Chris Wysopal (my dream is that a static tool can fix business logic
 flaws) and Ed Amoroso (security researchers are the bomb defusers of the
 Internet).

 I laughed at your blunt comment: that would be great (everybody doing
 software assurance in 5 years) but also impossible.

 Andrew, in addition to your points:

 - I liked her self-deprecating humor when she talked about her coding skills

 - I think she made a justified, underhanded jab at the appsec community to
 make our stuff easier to use when she said:
 (At 4m 55sec) There are a lot of people who are very well-intended and 

[SC-L] Software security definition(s)

2008-03-13 Thread Arian J. Evans
I hate to start a random definition thread, but Ben asked me a good
question and I'm curious if anyone else sees this matter in the
same fashion that I do. Ben asked why I refer to software security
as similar to artifacts identified by emergent behaviors:

   Software security is an emergent behavior that changes over time
   with software use, evolution, extension, and the changing threat
   landscape. It's not an art you get people inspired in.
  
  You keep using that phrase - emergent behavior - and I'm not sure what
  you mean.

So one of the biggest challenges for me was communicating
to any audience, and most importantly the business and
developers, what secure software/applications means.

Around 2000/2001 I was still fixated on artifacts in code
and in QA and secure software == strongly typed
variables with draconian input validation and character
set handling (canonicalization and encoding types).

Yet I continued to have problems with the word "security"
when talking to business owners and developers about software.

This is because you say "security" and business owners
see sandbags blocking their figurative river of profits. Corporate
and gov developers see sandbags stopping them from going
home and having dinner with the wife or playing WoW.
Startup developers just laugh.

I started using phrases like "predictable and dependable software"
instead of "security." Giving examples of Rob's Report -- it has all
these user requirements it must meet to pass on to UAT, and if it
fails, blah blah. SQL injection is a failure of degree, and not of kind.
Same kind of failure as a type-mismatch error that stops the report
from running -- but huge difference in degree of impact.

Finally it dawned on me: folks latch on to this secure software stuff
as features and requirements, so anyone using waterfall gets drowned
in insecure software due to forever-pushed-back security features.

My experience also was that never, ever, is a Production app
deployment identical to dev regions, let alone QA stages, UAT, etc.

From a security posture: prod might be better *or* worse than the
other environments.

Yet even worse -- sometimes I'd test app A and app B for a company,
and they would both fare well when tested independently.

I'd come back a year later and the company would have bolted
them together through say some API or WS and now, together,
apps A and B were really weak when glued together. Usually this
was due to interfaces handling I/O that they weren't intended to.

Long and short of it -- it struck me that security is a measure
of behaviors. It is one quality of an application, like speed/
performance, but this quality is measured by the observed
behaviors, regardless of what the source code, binary, or
blueprint tells you...

Note -- I am not saying any of these things are not valuable.
There are things I'd still far rather find in source than black box,
and things binary tracing is brilliant at. I'm simply saying that
at the end of the day, the proof in the pudding is at run-time.

The same way we perform the final measure of quality for
commercial jets: I don't care about tensile strength exceeding
tolerance standards if the wings are falling off at run-time.

If someone compromises 1000 customer accounts, steals
their prescription data, and tells the Zoloft folks who is
buying Prozac so they can direct-market (real world example):
you have a defective application.

Those behaviors are always emergent -- meaning they can
only ultimately be measured at runtime in a given environment
with a given infra and extension (like plugging app B into app
A through some wonky API).

Sometimes it's the *caching* layer that allows you to insert
control characters that lead to compromising the rest of the
application, soup to nuts.

You won't find any hint of the caching layer in source, in binary,
in blueprint, in conversation with the devs, in dev or QA regions,
in staging and UAT, or by running your desktop or network VA
webapp scanner unauthenticated on the entry portal.

You might find it in documentation, or during threat modeling.

You will find it when you start measuring the emergent
behavior of a piece of otherwise well-vetted software
now sitting behind a weak caching layer and you
start observing wonky caching/response issues and
realize these behaviors are weak; you can attack them
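The caching-layer example above is a classic control-character problem: a value that is harmless to the application proper can smuggle CR/LF header boundaries past an intermediary. A minimal defensive sketch (`set_header` is a hypothetical helper, not any particular framework's API):

```python
def set_header(headers: dict, name: str, value: str) -> None:
    # Reject control characters: CR/LF in a header value lets a caller
    # terminate the header and inject new ones (response splitting),
    # which caches downstream may then store and replay.
    if any(ord(c) < 0x20 or ord(c) == 0x7F for c in name + value):
        raise ValueError("control character in header")
    headers[name] = value

headers = {}
set_header(headers, "X-User", "alice")  # fine
try:
    set_header(headers, "X-User", "alice\r\nSet-Cookie: session=evil")
except ValueError as e:
    print("rejected:", e)
```

The check has to live at the layer that actually emits the protocol, which is exactly why source review of the application alone can miss it.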

What is secure software?

It is one quality of an application that can be measured
by the emergent behaviors of the software while trying to
meet and enforce its use-case in a given run-time environment.

This now fits back to the whole ideology discussion
that is sorely needed and overdue.

-- 
Arian Evans
software security stuff

Re: [SC-L] quick question - SXSW

2008-03-13 Thread Arian J. Evans
On Wed, Mar 12, 2008 at 3:05 PM, Andy Steingruebl [EMAIL PROTECTED] wrote:

  On a related note a quick perusal of the JavaOne conference tracks
  doesn't show a lot of content in this area either.  Is this due to a
  lack of interest, or people in the security world not pitching talks
  to the development conference organizer?

Both.

Java is a tricky one. There were security sessions early on in
Java conferences, but they were about the stuff no one on the
planet actually does -- e.g. container security, code signing,
and JVM/applet permissions.

I think that turned a lot of devs off of security in Java-land.

In related news we're building J2EE courseware in a by developers,
for developers fashion and Anurag will be releasing some APIs
for java developers to actually do things like output encoding,
where Java/J2EE is about 4 years behind the rest of the world.

I imagine later this year or next year you'll see a few of us focusing
on developer (versus security) conferences, though I don't think
this changes the business problem/reality at all.

-- 
Arian Evans
software security stuff


Re: [SC-L] quick question - SXSW

2008-03-12 Thread Arian J. Evans
my responses inline

On Wed, Mar 12, 2008 at 6:08 PM, Benjamin Tomhave
[EMAIL PROTECTED] wrote:
 I think you misunderstood my points a little bit. SXSW was just a
  current conference example. As Gary's pointed out, there are many
  conferences. It's possible SXSW wasn't a good example, but it was meant
  more symbolically. More comments inline...

Oh, I did miss your point. Overall, I agree. I've had mixed experiences
leading me to re-evaluate my stance.

A security-unaware dev friend recently told me about Microsoft coming
to some conference and demonstrating this new SQL Injection thing
to them, and he told me how amazing and cool it was. He asked if I
did SQL Injection.

That's the first time in several years he's responded to what I've primarily
worked on for 8+ years, and incidentally for over 10, and told him about
over god-knows-how-many Guinness. I don't blame the Guinness. (who can?)


   They just don't care.
  
   They will never care.
  
  I fundamentally disagree. Everybody is the right crowd, assuming the
  message is tailored appropriately. It's precisely the perspective you
  espouse that concerns me greatly. I don't believe the security industry
  _as_a_whole_ has maintained momentum, and I attribute that directly to
  the SEP* effect. This goes directly to my larger point about ingraining
  security considerations/thoughtfulness/practices into all aspects of the
  business (not just coding, btw).

I think this approach is doomed to failure, though my thoughts and experiences
are mixed. While I have quit evangelizing secure software, I do meet more
and more devs interested in software security -- who were not merely 3 to
5 years ago. Something is definitely changing, but abstract interest in appsec
!= secure design & implementation.

While this isn't an argument -- just an observation -- I hear this
"build security in"
notion preached most often from the following:

(a) people new to the appsec industry
(b) academic-minded & PhD-type folks into taxonomies
(c) government folks/agencies out of touch with the business world
(d) eager kids just-out-of infosec college joining our industry
(e) people with livelihood/agendas staked on these notions

Maybe I'm just jaded, but it doesn't seem to work in many, and
possibly most, cases. I think the momentum is lost because
all these "build security in" and "Secure SDLC" things don't work
for a lot of people/organizations. I still have some suspicions
this may be due to implementation, but...

This industry cannot even get its node-hierarchies right. Even
the MITRE CWE is fraught with node-confusion betwixt attack
nodes, vulnerability nodes, and design & implementation weakness nodes.

But at the end of the day the business doesn't care.

Will this model of car sell and will we get sued over defects in it?

That's the world. If building secure cars was the answer Volvo
would have been a wild success many, many years ago.


  If everyone starts coding more responsibly, then at some point the genre
  of secure coding goes away, because it's inherent in everything that's
  written. Today, I'd settle for all externally-facing apps being coded to
  address the OWASP Top 10, and to get developers to think for a change
  before doing silly things like implementing client-side filtering in the
  client code.

Client-side filtering isn't silly. It's smart. You probably mean using it
as a security control, but it's that verbiage that arms legions of the
clueless appsec auditors now joining our industry who don't know
sh*t about software design or implementation, or business use-case,
and cause software professionals to scoff at our industry. I can't tell
you how many appsec reports I've seen that say "don't use client-
side validation -- it's dangerous" and I start looking for more "best
practice" nonsense listed as vulnerabilities.

"Don't allow dangerous characters in input." WTF?
"Insufficient input validation." For whom?
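To make the distinction concrete: client-side filtering is usability, while a positive (allowlist) check at the server boundary is the actual control. A sketch with an invented field rule (real rules come from the field's business use-case, not a universal "safe character" list):

```python
import re

# Positive validation: state what IS allowed for this one field,
# rather than blacklisting "dangerous characters."
USERNAME = re.compile(r"[A-Za-z][A-Za-z0-9_.-]{2,31}")

def validate_username(value: str) -> str:
    if not USERNAME.fullmatch(value):
        raise ValueError("fails positive validation for username")
    return value

print(validate_username("arian_evans"))  # passes
try:
    validate_username("a'; DROP TABLE users;--")
except ValueError:
    print("rejected")
```

The same rule can be mirrored client-side for a friendlier form, but only the server-side check counts as enforcement.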

I think I see your perspective though.

I think the answer is: IDEs that make it harder to shoot oneself in the
foot, secure frameworks, and secure environments (for all us text-editor
types) and maybe even newer languages with some real notion of a
data/function boundary -- those are the keys. Leave secure coding
out of it.

Combine that with security controls that provide meaningful mis-use
case and fraud detection, instead of attack-vector blocking, and you
can even allow weak password-reset questions. Which is what
the business, and my mother, really wants.

I hesitate to say this, this is like fumbling with flame-bait, but over
the last two years I feel more and more like many in this industry,
including OWASP which you mentioned, are going astray down
this fantasy land of secure-coding and assurance.

The government (and contracting agencies by proxy) are into
assurance. The rest of the world is not.

The private sector is into mitigation, insurance, fraud detection
and incident response.

OWASP notions and directions feel to me like 

Re: [SC-L] quick question - SXSW

2008-03-12 Thread Arian J. Evans
So two thoughts Ben, purely my 0.02 USD:

1. This is largely the wrong crowd. Designers of small web2.0 stuffs,
particularly the domain of widgets and WS interfaces for all the usual
suspect platforms (flickr, facebook etc.) as well as most startups:

They just don't care.

They will never care.

SXSW has "long tail" and "design pattern" 2007-buzzword-
compliant presentations.

You could probably get a snazzy "top 5 web2.0 security mistakes
everyone is making" or "Top 5 Security Design-Patterns" in there,
but I don't think it's the right audience. OSCON might be a better
fit, if you praise Ruby and release some open source security project.

2. This security DNA notion -- I don't really buy it. I don't think
there's a big tipping point coming for all hands in for writing secure
software in our near future. Maybe if people start dying because
of insecure software, this will change, but until then ...

I do see increasing awareness in mid- to large-size organizations
(Fortune 2000+). Developers are more aware and more interested
in security, but mostly in organizations that penalize (fire or
demote) individuals involved in public security blunders.

Overall security is not a feature or a function that you can monetize.
It's not even cool or sexy. It's an emergent behavior that is only
observed when it is making your software harder to use.

Not until insurance or substantial penalties are the norm (if they are
ever the norm) will we have meaningful quantitative data to drive a
justification for security as a requirement in startup or most open
source software projects. That's my opinion, anyway.

---
Arian J. Evans
Software Security Stuff


On Wed, Mar 12, 2008 at 2:31 PM, Benjamin Tomhave
[EMAIL PROTECTED] wrote:
 First, thanks for that Bill, it exemplifies my point perfectly. A couple
  thoughts...

  one, targeting designers is just as important as reaching out to the
  developers themselves... if the designers can ensure that security
  requirements are incorporated from the outset, then we receive an added
  benefit...

  two, a re-phrasing around my original thought... somehow we need to get
  security thinking and considerations encoded into the DNA of everyone in
  the business, whether they be designers, architects, coders, analysts,
  PMs, sysadmins, etc, etc, etc. Every one of those topics you mention
  could (should!) have had implicit and explicit security attributes
  included... yet we're still at the point where secure coding has to be
  explicitly requested/demanded (often as an afterthought or bolt-on)...

  How do we as infosec professionals get people to the next phase of
  including security thoughts in everything they do... with the end-goal
  being that it is then integrated fully into practices and processes as a
  bona fide genetic mutation that is passed along to future generations?

  To me, this seems to be where infosec is stuck as an industry. There
  seems to be a need for a catalyst to spur the mutation so that it can
  have a life of its own. :)

  fwiw.


  -ben

  --
  Benjamin Tomhave, MS, CISSP
  [EMAIL PROTECTED]
  LI: http://www.linkedin.com/in/btomhave
  Blog: http://www.secureconsulting.net/
  Photos: http://photos.secureconsulting.net/
  Web: http://falcon.secureconsulting.net/

  [ Random Quote: ]
  Augustine's Second Law of Socioscience: For every scientific (or
  engineering) action, there is an equal and opposite social reaction.
  http://globalnerdy.com/2007/07/18/laws-of-software-development/



  William L. Anderson wrote:
   Dear Ben, having just been at SXSW Interactive (I live in Austin, TX) I
   did not see many discussions that pay attention to security, or any
   other software engineering oriented concerns, explicitly.
  
   There was a discussion of scalability for web services that featured the
   developers from digg, Flickr, WordPress, and Media Temple. I got there
   about half-way through but the discussion with the audience was about
   tools and methods to handle high traffic loads. There was a question
   about build and deployment strategies and I asked about unit testing
   (mixed answers - some love it, some think it's strong-arm micro-mgt (go
   figure)).
  
   There was a session on OpenID and OAuth (open authorization) standards
   and implementation. These discussions kind of assume the use of secure
   transports but since I couldn't stay the whole time I don't know if
   secure coding was addressed explicitly.
  
   The main developer attendees at SXSW would call themselves designers and
   I would guess many of them are doing web development in PHP, Ruby, etc.
   I think the majority of attendees would not classify themselves as
   software programmers.
  
   To me it seems very much like a craft culture. That doesn't mean that a
   track on how to develop secure web services wouldn't be popular. In fact
   it might be worth proposing one for next year.
  
   If you want to talk further, please get in touch.
  
   -Bill Anderson

Re: [SC-L] Perspectives on Code Scanning

2007-06-07 Thread Arian J. Evans

inline

On 6/6/07, McGovern, James F (HTSC, IT) [EMAIL PROTECTED]
wrote:


I really hope that this email doesn't generate a ton of offline emails and
hope that folks will talk publicly. It has been my latest thinking that the
value of tools in this space is not really targeted at developers but
should be targeted at executives who care about overall quality and security
folks who care about risk. While developers are the ones to remediate, the
accountability for secure coding resides elsewhere.




Nice email, James. These conversations are always enlightening. The responses
tend to illuminate each respondent's background: (a) university software
experience, (b) government software-project experience, or (c) enterprise
software experience. That makes a lot of difference in these discussions.
Most enterprise /and/ small-ISV developers I know, the good ones, either
take pride in their code quality and self-manage security issues, or are
fast and productive and don't give a crap.

And why should they give a crap? It's not their problem domain.

The armchair software-security pundits: "Shame on you. You didn't properly
transcode these Hebrew and Latin code pages to avoid XSS attacks, dummy."

The fast, effective developer: "I delivered your functional specifications to
the letter, on time, and the transcoding engine is FAST. What's the problem
here?"


It would seem to me that tools that developers plug into their IDE should be
free, since the value proposition should reside elsewhere. Many of these
tools provide audit functionality and allow enterprises to gain a view
into their portfolio that they previously had zero clue about, and this is
where the value should reside.




So, of tools that plug into the IDE, let's distinguish between *source-code*
and *run-time* scanners. Source scanners, I suspect, will die a slow death,
because sooner or later they are going to be integrated into the IDE and
per-seat value will plummet. They will be a given feature of IDEs. Sooner
or later the IDE vendors will either buy & integrate, or come out with their
own tools. Take Compuware: the quality is pretty low, but if I were a
betting man I would bet that the bar gets set at low-quality
included-functionality rather than at a $50k-per-seat, amazing-quality
source code analyzer.

I believe this is different from run-time or human design analysis, largely
because the business case is different.

For example: I have some clients that really like their Fortify tools, but
they really don't like all the time and critical development resources it
takes to use them, and how expensive they are. Cool tools, sexy technology,
but they are hard to align with the business case and business goals for
software development on multiple levels.

Run-time analysis is different. Run-time scanner IDE plugins are, as a
concept, laughable at best. Seriously -- who thinks that run-time scanners
for developers are a viable idea?

Run-time analysis's strengths are different too. It is easier at run-time
to discover and analyze fundamental design flaws (note: I did not say find
them all, but definitely find indications of them), and to identify
emergent behaviors. At best, IDE plugins can perform some form of unit
testing, but beyond verification of basic data-typing and IFOF/IFOE-type
issues, meh, not very useful. Not to mention entirely outside of the IDE
problem-domain.

Conclusion: two sets of problems, source analysis and run-time analysis. I
think there is a good-enough bar for source analysis that will get set
fairly low and wind up baked into IDEs... similar to the Visual Studio /GS
switch. I've already seen a pretty effective one that will probably wind up
in one of the next releases of Visual Studio. It's actually better than some
of the commercial offerings today, and baked in.

Spot on James.

Human source analysis for design-pattern issues, I think, will always
be needed. Same for run-time analysis. They solve for different
problem-domains, though.


If there is even an iota of agreement, wouldn't it be in the best interest
of folks here to get vendors to ignore developer-specific licensing and
instead focus on enterprise concerns?




So some marketing guy one day grokked that there are only n security
people per organization that can run scanners, but there are Multiplier(n *
33.5) developers per organization... wow! We could sell all those
developers scanners!!! Ka-ching.

That sort of thinking is pretty cool; it leads to some cool sales-growth
graphs and profitability projections. Need board-room meeting material? It
fits perfectly into that circular-arrow graph everyone has, with TCO and
Lifecycle Management and ROI and how it all loops together and saves
everyone big bags of money after they spend up front.

This circular lifecycle-management graph is labeled "Security in the SDLC"
and shows how you can buy a lot of developer/IDE scanners and have even the
cheapest developers scan their code up front, and you'll save big in 

Re: [SC-L] Darkreading: Secure Coding Certification (starting point)

2007-05-15 Thread Arian J. Evans

1. This is a great first step. While it sounds so 2003: I still deal with
developers all the time who simply have no idea what to do or where to
begin for *very basic* issues. Input validation. Output encoding. Or they
try to solve them by doing crazy, wildly wrong things (dangerous-string
blacklists, case-changes for case-sensitive language injection (xhtml/js),
etc.).
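To make those "very basic" issues concrete, here is a minimal, hypothetical sketch (Python is my choice here; the original doesn't name a language) contrasting a dangerous-string blacklist with context-appropriate output encoding:

```python
import html

def blacklist_filter(value: str) -> str:
    """The wrong approach: strip 'dangerous' strings.
    Trivially bypassed, e.g. by nesting the banned token so that
    removing it reassembles a working payload."""
    for bad in ("<script>", "</script>"):
        value = value.replace(bad, "")
    return value

def render_comment(value: str) -> str:
    """The right approach: encode the output for the HTML context
    it is emitted into, regardless of what the input contains."""
    return "<p>" + html.escape(value, quote=True) + "</p>"

payload = "<scr<script>ipt>alert(1)</scr</script>ipt>"
print(blacklist_filter(payload))  # bypass: prints <script>alert(1)</script>
print(render_comment(payload))    # inert: all markup entity-encoded
```

The point of the sketch: the blacklist "fixes" the payload into a working attack, while output encoding is safe no matter what the attacker sends.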

2. Most of the world is still not getting the bug parade. 50%. You (Gary)
may see a biased sample of more edjumacated folks by reading SC-L and
working with a client base that may be in the upper bounds of secure
software knowledge.

3. Focusing on weak implementation practices (bugs) is just fine. That's
what most developers do. Implement.

4. Design and Pattern weaknesses are definitely essential. But that's not
what most developers do.

5. SANS could and should have some separate, additional certifications:

+ Non-dangerous requirements-gathering for Product Evangelists

+ Strong Software Design Principles for Business Owners

+ Strong Software Design Patterns for Software Architects/Lead Developers

+ How to describe misuse cases and dangerous omissions, for people writing
functional specifications

Those are all separate pieces of knowledge that, depending on the size of
the organization, may belong to separate people.

Certainly most of the developers I've worked with over the years would find
the above in the "WTF does this have to do with me?" category, and I can't
say I blame them.

And of course SANS makes money. Everything Alan Paller does is really good
at getting lots of free community effort to generate data sets and/or
tools they can charge other folks a lot of money for (CIS, SANS, SSI,
DShield, etc.).

Sounds pretty smart to me. And I'd sure rather have someone following CIS
guidelines or using SANS course-ware content than *nothing at all*.

Cheers

--
Arian Evans
solipsistic software security sophist

"I love deadlines. I like the whooshing sound they make as they fly by." -
Douglas Adams


On 5/15/07, Gary McGraw [EMAIL PROTECTED] wrote:


Hi Yo (and everyone else),

I'm afraid that the current test focuses all of its attention on BUGS (in
C/C++ and Java).  While we certainly need to eradicate simple security
bugs, there is much more to software security than the bug parade.  Plus,
when you look into the material, the multiple-choice format makes
determining the correct answer impossible at times.

I would rather move away from learning about bugs to learning about
defensive programming to avoid bugs in the first place.  The SANS material
focuses entirely on the negative as far as I can tell.  Here's a bug,
there's a bug, everywhere a bug bug.  Better than nothing?  Maybe.

SANS is very good at soliciting everyone's opinion, piling it all up in a
nice package, and then charging users for the result.  SANS is a for-profit
entity, not a university or a non-profit.  Please don't forget that.

As much as I would love to see a way to determine whether a random coder
has security clue, I'm afraid all we will get out of this effort is perhaps
a bit more awareness.

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Johan Peeters
Sent: Saturday, May 12, 2007 6:11 AM
To: SC-L@securecoding.org
Subject: Re: [SC-L] Darkreading: Secure Coding Certification

I agree that multiple choice alone is inadequate to test the true
breadth and depth of someone's security knowledge. Having contributed
a few questions to the SANS pool, I take issue with Gary's article
when it implies that you can pass the GSSP test while clueless.

There is indeed a body of knowledge that is being tested. SANS has
been soliciting comments on the document.

kr,

Yo

On 5/11/07, Gary McGraw [EMAIL PROTECTED] wrote:
 Hi all,

 As readers of the list know, SANS recently announced a certification
scheme for secure programming.  Many vendors and consultants jumped on the
bandwagon.  I'm not so sure the bandwagon is going anywhere.  I explain why
in my latest darkreading column:

 http://www.darkreading.com/document.asp?doc_id=123606

 What do you think?  Can we test someone's software security knowledge
with a multiple choice test?  Anybody seen the body of knowledge behind the
test?

 gem

 company www.cigital.com
 podcast www.cigital.com/silverbullet
 blog www.cigital.com/justiceleague
 book www.swsec.com

 ___
 Secure Coding mailing list (SC-L) SC-L@securecoding.org
 List information, subscriptions, etc -
http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
 SC-L is hosted and moderated by KRvW Associates, LLC (
http://www.KRvW.com)
 as a free, non-commercial service to the software security community.
 ___



--
Johan Peeters
http://johanpeeters.com

Re: [SC-L] Catching up, and some retrospective thoughts

2007-04-25 Thread Arian J. Evans

comments:inline

On 4/24/07, Jeremy Epstein [EMAIL PROTECTED] wrote:


I've just caught up with 6 weeks of backlogged messages in this group,



better than me; I fell off all the lists when I moved last year. Pardon any
list duplication:

(1) SOX is a waste, as several people said, because it's just a way to
give auditors more ways to demand irrelevant things on checklists - but
not to pay attention to actual security.  I've had customers demand that



[...] usual non-contextual nonsense audit security requirements removed

So yeah, this happens all the time. I used to work with several software
companies that store the key with the encrypted message (same host, same
DB), all because of the requirement to "encrypt sensitive data" --
firewall log-management products and such. Zero value. Check.
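That anti-pattern is easy to sketch. This is a hypothetical illustration in Python, with a toy XOR "cipher" standing in for real cryptography: when the key lives in the same row as the ciphertext, anyone who can read the table can decrypt it, so the "encrypt sensitive data" checkbox buys nothing.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only -- not real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The anti-pattern: key stored in the same record as the ciphertext.
key = os.urandom(16)
row = {
    "ciphertext": xor_cipher(b"4111-1111-1111-1111", key),
    "key": key,  # same host, same DB, same row
}

# Anyone who can read the row can decrypt it; the compliance
# requirement is met, the threat model is not.
recovered = xor_cipher(row["ciphertext"], row["key"])
print(recovered)  # b'4111-1111-1111-1111'
```

The design fix is key separation (different host, HSM, or at minimum a different trust boundary), which no checklist phrase like "encrypt sensitive data" captures.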

(2) PCI, by contrast, is dramatically better, because it's got actual
things you can measure, and some of them have some relevance to software
security.  However, it's having an effect that I think was unintended by
the folks who wrote it (or at least the ones I met at a recent
conference) - merchants are pushing the requirements down to all of
their suppliers, regardless of whether they're applicable.



[...]

To look the proverbial gift horse in the mouth: there's another pattern
I've seen from several PCI assessors: they are requiring some form of
software security testing. There seems to be a lot of general confusion
about what webappsec in PCI is today and/or means. (It means nothing
that I know of, outside some random training/awareness requirement.)

The problem is there is absolutely no definition of what this means. WHS,
for example, has two bit-buckets for similar attacks: XSS and Content
Spoofing. Watchfire added a third, Phishing, which is an overlap of the
two above (their developer didn't want to admit to me that his XSS checks
were lame, so he made up a random title). Then you have HTTP Response
Splitting, which I think has next to zero attack surface. We stick close
to PCI vuln defs so tend to ignore it, but for some vendors that is a
HIGH-severity issue. (!?)

So (a) what is being measured is equivocal, and (b) what is being held
up as priority to be fixed is pretty borked at the moment too.

The really important stuff, like Authentication and Authorization issues,
seems entirely ignored in favor of bit-fiddling like XSS, since basic XSS
is generally easier to test for without context (e.g., scanner jockey:
click, scan).
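Why basic XSS is the scanner favorite: a context-free probe is nearly trivial. A hypothetical sketch (Python; `fetch` stands in for whatever HTTP client a scanner uses, so no real network access is assumed):

```python
import secrets

def probe_reflected_xss(fetch, url: str, param: str) -> bool:
    """Inject a unique marker wrapped in markup and see if it comes
    back unencoded. Needs zero knowledge of the app's business logic,
    which is exactly why scanners lean on checks like this while
    authorization flaws go untested."""
    marker = secrets.token_hex(8)
    payload = f"<{marker}>"
    body = fetch(f"{url}?{param}={payload}")
    return payload in body  # reflected without encoding: likely XSS

# A toy vulnerable "server": echoes the query value straight into HTML.
def vulnerable_fetch(full_url: str) -> str:
    query = full_url.split("=", 1)[1]
    return f"<html><body>You searched for: {query}</body></html>"

print(probe_reflected_xss(vulnerable_fetch,
                          "http://example.test/search", "q"))  # True
```

Contrast that with testing "Rob cannot read Sally's report," which requires accounts, roles, and an understanding of the workflow before a single request can be judged pass or fail.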



(3) Vendors do what their customers ask for.  If my customers ask for
better security, we'll put our engineering resources into improving
security - just as Microsoft has done.


[...]

Cynically speaking: has it paid off for MS? Vista? Is security driving
resounding success there? Do we need more time to tell? SQL Server
2005 is nice, but I don't know anyone adopting it because of security.

OTOH: there are folks waving the security banner and getting a positive
(I believe monetary) response from their clients and prospects. They come
in a couple of flavors:

1. Touting Security whilst doing something about it:

- http://www.discoveryproductions.com/

(apology to all the folks I know I'm leaving out, not sure who all
I am allowed re: NDAs to mention)

2. Touting security while making completely false claims, without actually
implementing or measuring it (there is no price to pay for doing this
today; I mean, hey: what is "software security" anyway?):

[url removed]
(gives you a nice "uber-secure" message when you log in;
unfortunately, thanks to their litigious nature, vulns are neither
disclosed nor fixed)

[url removed]
(similar story; the website used to have a picture of a safe on the
product page; at least they took that down, but left all the
client-side config parameters in the app)

I chickened out and removed both URLs before sending. Nobody probably
cares about the specific companies, except those companies, who
have gotten testy with me before.

3. People using security verification as a weapon; this is at least
the fifth time I have seen this in my career (direct observation, not
all the implied vuln research battles):

http://forums.aspdotnetstorefront.com/showthread.php?t=6257

I'm going to fire up a blog on all the fun stuff, forensic and the like,
that I saw at FishNet, and now that I have visibility into 500+ websites,
there should be some useful measurement stats to provide for folks. I
don't think anyone else out there has as many production sites to evaluate
at one time, so ideas on what to mine for data are welcome.

If someone wants a measurement bar (e.g., "we are at X, Y for security
compared to similar software in our industry"), that is probably something
to discuss how to provide, too. At least, I see some *hows* that are
all crippled by the sensitivity of the information (at least, the
perceived ability to correlate it to clients). But worth exploring, I
think, for you ISVs...

Thanks, cheers,


--
Arian Evans
solipsistic software security sophist

I spend most of my money on motorcycles, martinis, and mistresses. The 

Re: [SC-L] Economics of Software Vulnerabilities

2007-03-21 Thread Arian J. Evans

Spot on thread, Ed:

On 3/20/07, Ed Reed [EMAIL PROTECTED] wrote:


Not all of these are consumer uprisings - some are, some aren't - but I
think they're all examples of the kinds of economic adjustments that occur
in mature markets.

   - Unsafe at Any Speed (the triumph of consumer safety over
   industrial laziness)

   - Underwriters Laboratories (the triumph of the fire-insurance
   industry over shoddy electrical manufacturers)

   - VHS (vs. Betamax - the triumph of content over technology)



This is ironic to me: I wrote a paper for management types, an
upper-tactical to strategic-level view of the software security problem.
In its current incarnation it is called "Unsafe at Any Speed." Besides a
layman's breakdown of the fundamental issues, it covers (a) implementation
issues, almost entirely falling under the inability to enforce
data/function boundaries in modern implementation-level languages or
platforms, and (b) functional issues, which are design/workflow- or
emergent-behavior-related.

The important point I stress is that there really hasn't been a
Whistle-Blower Phase in the software industry concerning security. Today,
vague arguments about plane crashes aside, there is little to no hard
evidence tying software defects with security implications to loss of human
life. And that's the kicker: dollars to DNA, it's death that sells.

I also argue that we are killing the canaries in the coal mine. The script
kiddies, the guys writing the little payload-less worms, the kid who wrote
the Samy worm: they are scared to touch systems now. These were the
canaries down in our software coal mines. SQL Slammer and the Witty worm,
though payload-less, caused negative impact, but there were no charges for
those.

The charges always land on some token young guy for some relatively benign
worm. MySpace slows down and we prosecute a young kid with above-average
problem-solving skills. I used to call the worms that slowed things down
"free pen tests," and later "canaries." They had a real (positive) value
to us, and we've killed that value without replacing it with something
better.

I experienced rising vendor animosity and threats in the last two years, a
reversal of the trend, back to the "good old days," coupled with work
constraints restricting full-disclosure options. What made this worse (to
me, ethically) is that many of these vendors were advertising security to
their clients, from an image of a safe on the website with a list of
security features, to announcements proclaiming the security of the system
displayed to users after they log in. None of these systems were measurably
secure in any fashion I could detect, not even against the usual suspects
(SQLi, XSS, Insufficient Authorization/workflow bypass, etc., etc.). I got
the feeling things were getting worse. That, or I hit some weird biased
sample of ISVs.

I think you are on to something here in how to think about this subject.
Perhaps I should float my little paper out there and we could shape up
something worthwhile describing how the industry is evolving today.

I have been peacefully quiet since I quit my old job, ignoring the security
lists and industry, and haven't poked the bear, err, trolled any of the
usual suspects lately. Looks like I've been missing out on some good
dialogue. Thank you, this was very helpful,

Arian J. Evans
Solipsistic Software Security Sophist at Large
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


RE: [SC-L] ddj: beyond the badnessometer

2006-07-14 Thread Arian J. Evans
Great stuff, Nash. To reiterate one important statement: many orgs
today will *only* respond to a working exploit. (I'm not sure what
the sample (%clue) of orgs I see is vs. Cigital's clients, but...)

Pen-test vs. code review, black-box, white-box, whatever:

There is absolutely no difference at the end of the day, in terms of
*verification*, between finding SQL injection or susceptibility to
XSS attacks via black-box attacks or fully white-box human-eyeball
source code review.

Commission, Omission, transmission, no difference. It's verification.

The pen test *can* make an acceptable 'punch card'. Depends on how
the punch card results are written:

+ Does the punch card simply list the top 10 XSS'able parameters?

+ Or does it provide a finding covering either or both of:

  - design Omission, *or*

  - implementation Commission: failure to encode output
    appropriately for the User Agent(s) specified?

The rabbit hole can, of course, go far more usefully deeper in
terms of problem resolution with source code review. (I think
that's the worst sentence I've ever written.) Today, maybe for
the last ~1.5 years, I get "but what library should I use
to accomplish this with [insert_framework]?" instead of "uh,
what's output encoding?" or "input validation is too hard/slow."

Assertion falsification can be a tail-chaser. That's why you
add in business context. I think you'll find that many good pen
testers sit down with the business and help them define security
goals for the application, e.g., "Rob must never get Sally's report"
or "Sally's report must remain sacred at all points: in transit,
storage, access, etc."
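Those goal statements translate directly into testable authorization assertions. A hypothetical sketch in Python, reusing the Rob/Sally example above (the report name and data structure are invented for illustration):

```python
# Toy ownership table; in a real app this would come from the data tier.
REPORT_OWNERS = {"q2-sales.pdf": "sally"}

def can_read_report(user: str, report: str) -> bool:
    """Enforce the business rule: only the owner may read a report.
    A pen test can then verify the assertion 'Rob must never get
    Sally's report' instead of just fuzzing parameters blind."""
    return REPORT_OWNERS.get(report) == user

# The security goals, written down as executable checks:
assert can_read_report("sally", "q2-sales.pdf")
assert not can_read_report("rob", "q2-sales.pdf")
```

The point is that the assertion comes from the business conversation, not from the scanner; the pen test merely tries to falsify it.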

Commonly this is achieved through threat modeling, which turns
pen testing into a verification mechanism.

Ultimately the folks on this list are pretty smart and I'd wonder
why/if this dialogue is needed, except that a recent discussion
opened my eyes a bit to approaches I thought were doornail dead.

A fairly large consultancy with a practice focused on application
security contacted me earlier this year, and in the course of
discussing their approach to appsec I asked, "but when do you
talk to the business, and when do you work with the developers?"
Their response was, "What?"

After repeating the question I got told, "Oh, no, you won't
talk to those folks or get to see their documentation; we're
security professionals, not developers."

So evidently there is still a market for high-dollar, completely
blind pen tests of apps with zero business context and no
understanding of architecture, dev/design goals, etc.

Hmmm.

That's what I'm guessing Gary means, and surely that sun is
slowly setting.

-ae

p.s. - Nash, when I first read your post, I thought p2 started
with "Pen-tests are highly addictive." Then I re-read.
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Nash
 Sent: Thursday, July 13, 2006 9:18 AM
 To: Gary McGraw
 Cc: Secure Coding Mailing List
 Subject: Re: [SC-L] ddj: beyond the badnessometer
 
 On Thu, Jul 13, 2006 at 07:56:16AM -0400, Gary McGraw wrote:
  
  Is penetration testing good or bad?
  http://ddj.com/dept/security/18951
 
 
 Test coverage is an issue that penetration testers have to deal with,
 without a doubt. Pen-tests can never test every possible attack
 vector, which means that pen-tests can not always falsify a security
 assertion.
 
 Ok. But... 
 
 First, pen-testers are highly active. The really good ones spend a lot
 of time in the hacker community keeping up with the latest attack
 types, methods, and tools. Hence, the relevance of the test coverage
 you get from a skilled pen-tester is actually quite good. In addition,
 the tests run are similar to real attacks you're likely to see in the
 wild. Also, pen-testing is often intelligent, focused, and highly
 motivated. After all, how would you like to have to go back to your
 customer with a blank report? And, the recommendations you get can be
 quite good because pen-testers tend to think about the entire
 deployment environment, instead of just the code. So, they can help
 you use technologies you already have to fix problems instead of
 having to write lots and lots of new code to fix them. All of these
 make pen-testing a valuable exercise for software environments to go
 through.
 
 Second, every software application in deployment has an associated
 level of acceptable risk. In many cases, the level of acceptable risk
 is low enough that penetration testing provides all the verification
 capabilities needed. In some cases, the level of acceptable risk is
 really low and even pen-testing is overkill. I do mostly code review
 work these days, but I find that pen-testing has more general
 applicability to my customers. There are exceptions, but not that
 many.
 
 Third, pen-tests also have real business advantages that don't
 directly address risk mitigation. Pen-test reports are typically more
 down to earth. That is, they can be read more easily and the attacks
 can be demonstrated more easily to 

RE: [SC-L] Why Software Will Continue to Be Vulnerable

2005-05-01 Thread Arian J. Evans

 -Original Message-
 From: [EMAIL PROTECTED] 
 Sent: Friday, April 29, 2005 2:32 PM
 To: SC-L
 Subject: [SC-L] Why Software Will Continue to Be Vulnerable

 This makes it highly unlikely that software companies are 
 about to start dumping large quantities of $$ into improving software quality.
 

That's interesting. And yet it's even worse than that. Software security,
for the most part, is not yet a *business* problem. Most businesses
(at least, that I deal with) still see software security as a feature
problem (i.e., "we'll add it in version 1.1"), an operational problem
(e.g., network security), or a process problem (e.g., log review or some
such nonsense that they don't likely do anyway). Even worse, security
folks who don't understand the problem make the issue political as they
try to advance their careers by solving the problem with lots of
security-appliance widgets and scanners and such (which they don't
understand either).

So you have (1) lack of public perception that there is an issue, (2) lack
of business perception that it's their issue, and (3) Information Security
Managers/CISOs trying to solve a business problem with more technology.

But all is not lost. There are still drivers:

1. Regulations. SB 1386 is starting to make a large impact in
business perceptions.

2. Standards & Certifications: albeit there is really an utter lack
of standards/certs for software security, businesses are starting
to look for these; several I'm dealing with are looking for these
as selling features.

E.g., "Our widget is more secure than Competitor Y's widget
because it is certified secure software."

3. Real-world compromises. Take something as simple as XSS. How
do you take it seriously when NO ONE is exploiting it? (I know of only
a small handful of cases between 2000 and 2003.) But that all changed
in 2004, particularly December 2004, when there was a string of
advanced XSS attacks against financial institutions.

(While there are some cool examples from 2004 that I use a lot in
presentations, none, I repeat none, have any meaningful loss numbers
associated with them that I am aware of.)


-ae