Re: [SC-L] BSIMM3 lives

2011-10-15 Thread Steven M. Christey


Gary,

Congratulations to you, Brian, Sammy, and the rest of the BSIMM3 
community!


I have a few questions:

1) Was any analysis done to ensure that the 3 levels are consistent
   from a maturity perspective - for example, if an organization
   performed an activity at level 2, was there a high chance that it
   also performed many of the level-1 activities?  For instance,
   many T2.x activities were done by more organizations than their
   counterpart T1.x activities, and there's a similar pattern with
   some SR2.x versus SR1.x.

2) Any thoughts on why the financial services vertical scored
   noticeably lower than ISVs on Code Review, Architectural Analysis,
   etc.?  Maybe ISVs have a better infrastructure for launching
   these activities because code development is a core aspect of
   their business?

3) The wording about OWASP ESAPI in SFD2.1 is unclear: "Generic open
   source software security architectures including OWASP ESAPI should
   not be considered secure out of the box."  Does Struts, mentioned
   earlier in the paragraph, also fall under the category of "not
   secure out of the box"?  Are you saying that developers must be
   careful in adopting security middleware?


- Steve
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
___


Re: [SC-L] informIT: Building versus Breaking

2011-09-01 Thread Steven M. Christey


While I'd like to see Black Hat add some more defensive-minded tracks, I 
just realized that this desire might be a symptom of a larger problem: there 
aren't really any large-scale conferences dedicated to defense / software 
assurance.  (The OWASP conferences are heavily web-focused; Dept. of 
Homeland Security has its software assurance forum and working groups, but 
those are relatively small.)


If somebody built it, would anybody come?

- Steve


Re: [SC-L] InformIT: comparing static analysis tools

2011-02-04 Thread Steven M. Christey


Jim,

Maybe you would have had more success if you explicitly said "in the 
cloud" ;-)


- Steve


On Thu, 3 Feb 2011, Jim Manico wrote:


Chris,

I've tried to leverage Veracode in recent engagements. Here is how the 
conversation went:


Jim: Boss, can I upload all of your code to this cool SaaS service for 
analysis?



Re: [SC-L] Food for thought on app sec

2011-01-25 Thread Steven M. Christey


Rohit,

Excellent article!  For the Top 25, we've had lots of people assume that 
the entire list is about domain-specific issues, when it also covers 
domain-agnostic issues.  My first guess is that "domain-specific" has a 
loose association with implementation, and "domain-agnostic" has a loose 
association with design.  Better modeling the differences between 
domain-agnostic and domain-specific might also partially explain the 
false-positive rates in automated code scanners [1] and why scanners seem 
to be very limited for domain-specific issues without sufficient tuning.


There may be some subtleties with how to classify things like XSS, which 
is arguably both domain-agnostic and domain-specific; a CMS admin often 
has the privileges to insert script, thus no XSS (which requires a 
domain-specific assessment).  You might also have requirements that are 
specific to a particular product; for example, path traversal might be 
fine for an application if it's provided as a command-line argument for 
startup, but not when reading a pathname from remote input.  This suggests 
an application-layer notion of domain-specific.


We do not have this type of distinction for weaknesses in CWE, though it 
may be useful for some consumers (SC-L readers can contact cwe@mitre if 
you think this would be useful, and why).


- Steve


[1] See NIST's SATE project for why "false positive" and "true positive" 
are not fine-grained enough for classifying the correctness/utility of 
automated scanning findings.



Re: [SC-L] [WEB SECURITY] Backdoors in custom software applications

2010-12-23 Thread Steven M. Christey


On Mon, 20 Dec 2010, Arian J. Evans wrote:


On a day to day basis - here are the most common backdoors in
webapps I've encountered over the last 15 years or so:

1) Developer Tools Backdoor hidden under obscure path
2) COTS module improperly deployed results in backdoor
3) Custom admin module, Auth gets changed/removed, results in same as #2
4) MVC framework autobinding exposes functions not intended to be
exposed resulting in backdoor

Most of these backdoors are accidental ignorance or mistakes.
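The autobinding exposure in item 4 can be sketched in a few lines of Python; the class and field names are invented for illustration, assuming a framework that naively copies request parameters onto model objects:

```python
# Hypothetical sketch of the MVC "autobinding" backdoor: a binder that
# copies every request parameter onto the model will happily set fields
# the developer never meant to expose.

class UserProfile:
    def __init__(self):
        self.display_name = ""
        self.email = ""
        self.is_admin = False  # never meant to be client-settable

def bind_request(model, params):
    """Naive autobinder: copies any matching parameter onto the model."""
    for key, value in params.items():
        if hasattr(model, key):
            setattr(model, key, value)
    return model

def bind_request_safely(model, params, allowed=("display_name", "email")):
    """Same binder restricted to an explicit allow-list of fields."""
    for key in allowed:
        if key in params:
            setattr(model, key, params[key])
    return model

# An attacker adds is_admin=True to an ordinary profile-update request:
evil = {"display_name": "Mallory", "is_admin": True}

p1 = bind_request(UserProfile(), evil)         # is_admin silently flipped
p2 = bind_request_safely(UserProfile(), evil)  # is_admin untouched
```

The allow-list variant is the usual fix real frameworks adopted for this class of bug.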


Note that these backdoors Arian listed can be classified in more general 
weakness/vulnerability terms (as he stated, e.g. 
authentication/authorization).  This is both a good thing, since existing 
detection techniques for regular vulns may still find these, and a bad 
thing - you can't automatically determine if intent is malicious or not.


Just thought I'd mention this since a lot of people seem to think of 
backdoors and other maliciously-inserted code as being somehow different 
than regular vulns.  The main difference is intent, which you can't always 
know.  For example, many vendor-introduced hidden accounts are introduced 
to make installation or support easier, or as a result of a testing 
feature that wasn't disabled before shipment.


You could argue that some backdoors/malicious-functionality are "business 
logic" but I suspect that most business-logic issues are really just 
instances of more general weaknesses/vulns that require knowledge of a 
specific domain to determine whether they're expected behavior or bad 
behavior.


- Steve




can turn malicious, but in the majority of cases I have seen, they
were not intended to be malicious. I have only seen deep-evil,
malicious backdooring a couple of times.

In devilish detail:

1) Back Door hidden under obscure path

/app/bin/steve/stevessecretphrase/tools/ (stuff under here is
dangerous and/or bad)

I see this happen over and over again for a variety of reasons. As
mentioned - most were not intentionally malicious - at least before
the developer who made the backdoor was fired. Usually the original
motivation was to instrument some part of the code for runtime
analysis, or provide a test/debugging interface for the developer.

Automated static analysis is by and large useless out of the box at
detecting these. These are valid web applications from an
implementation level perspective. Blind automated blackbox is fairly
limited at finding these too... these types of backdoors aren't linked
from the main app. They are rarely in the large, fairly useless
dictionary of directory names the scanners run brute force checks for.

How do you find these?

I wrote a tripwire-like tool for my webapps that tracked and diffed
files and paths for (a) changes and (b) cross mapped to request paths
in the WWW logs. In the early days I knew all the paths of all the web
apps we wrote, so this would allow me to identify new and unusual
paths as they showed up on the file system, or in request logs. As we
grew that quickly failed to scale. When your web apps grow into the
dozens and hundreds... I think a WAF is your only hope here.

Modern web apps don't lend themselves to file-level
path/file/directory audits (using automation). Source code scanning
lacks the context needed here, plus these things are very often
config-file dependent and config files in prod are always different
than the CBE/SIT code being scanned, and in prod they are rarely
audited properly.

The good news is that the bad guys have just as much trouble finding
these as you do. The exception being an ex-employee who has insider
knowledge (usually wrote the thing). But while dangerous, this appears
to be one of the least-common sources of compromise.



2) COTS module improperly deployed or configured for AuthC/Z resulting
in Back Door

Example: some Peoplesoft or SAP module with employee PII or amazing
administrative powers accidentally gets:

(a) deployed with insufficient (or missing) AuthC/Z, usually as part
of some grand web SSO scheme that turned messy.

(b) deployed to an unintended production server region that does not
have the same controls as the intended region. e.g.- IT is counting on
using Windows Integrated Authentication over HTTP on the Intranet for
Auth on this webapp. However, someone deployed parts of it to the
Internet-facing/DMZ webservers. Now you can access it with no
authentication at all. Or using Basic Auth and a default vendor or
admin/admin type account easily accessible over the internet.

At the rate I see #2 increasing, it may replace #1 soon.



3) Homegrown admin tools deployed with insufficient AuthC/Z

Same as #2, but harder to find with automation. It's easy to cook up
some tests for things everyone knows about, e.g.
/peoplesoft/admintools/ in static and dynamic analysis.

It is less easy to look for things you do not know exist.
/sebastians/homebrew/admintools

I see two backdoor situations here, one where /admintools/
accidentally has Auth removed for 

[SC-L] DHS Cyber Security BAA announcements related to software assurance

2010-11-11 Thread Steven M. Christey


FYI - heard about this from Russell Thomas on another list.  The US 
Department of Homeland Security will be publishing a Broad Agency 
Announcement (BAA) related to software assurance; an Industry Day session 
will take place on November 17, with a registration deadline of November 
12.


Technical Topic Areas of interest to SC-L readers include software 
assurance, improved measurement and research for code analysis techniques, 
metrics related to "how secure is this product?", survivability, and a 
number of other areas.


https://www.fbo.gov/index?s=opportunity&mode=form&id=3459d2180c7625e61fff3e2764b7f78d&tab=core&_cview=0

More details on the TTAs are from:

https://www.fbo.gov/utils/view?id=bea326bcd2453cc43f8d4c2beb150964


- Steve


Re: [SC-L] Java: the next platform-independent target

2010-10-24 Thread Steven M. Christey


On Fri, 22 Oct 2010, Jim Manico wrote:

I think the deprecation of these technologies for an enterprise is a 
wise idea. :) How can a large enterprise use PHP or ASP for 
security-critical applications with a straight face? Let's move forward 
to Ruby on Rails, Enterprise Java, .NET and other modern frameworks that 
are more mature from a security centric POV.


Just a minor, slightly-tangential-yet-not point, the Ruby / Ruby on Rails 
products have had approximately 10 CVE vulns since the beginning of 2009. 
Not a lot but still something for consideration in application deployment. 
And you know I support ESAPI but it's had its own issues, too (and I 
highly doubt I could do a better job security-wise).  Software is software 
and therefore will have vulns, whether its purpose is for a protection 
mechanism or for core functionality.  We will never stop interpreters 
or frameworks from having their own vulns, although if they make things 
easier security-wise, that's probably a much bigger payoff.


I'm making a generic point here.

- Steve


Re: [SC-L] Java: the next platform-independent target

2010-10-21 Thread Steven M. Christey


On Thu, 21 Oct 2010, James Manico wrote:


A lot of smart people disagree with me here - but the history of Java
sandbox problems, data theft though reflection, the weak security policy
mechanism, etc, backs up my recommendation.


Given the history of security problems in the PHP interpreter itself, and 
the occasional issues in Perl, and don't forget some of the tidbits in 
ASP.Net, maybe all those should be tossed out as well, and we should all 
move back to C. ;-)


Compilers/interpreters are software, too, and so are going to be subject 
to vulnerabilities.


(Not that I'm disagreeing with strategies that reduce attack surface, such 
as disabling client-side functionality.)


- Steve


Re: [SC-L] [WEB SECURITY] RE: blog post and open source vulnerabilities to blog about

2010-03-18 Thread Steven M. Christey


CWE, CLASP, and some other information sources have a number of code 
snippets that highlight various weaknesses.  In CWE, this code is easily 
extractable from the XML by grabbing the Demonstrative_Examples element, 
and we've even conveniently labeled examples with the various languages. 
You could also grab the CVE real-world examples from the Observed_Examples 
element.
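A rough sketch of that extraction, assuming a simplified stand-in for the CWE XML (the real schema is namespaced and more deeply nested, but uses the element names mentioned above):

```python
# Pull code snippets and CVE references out of CWE-style XML.  The
# SAMPLE document is a simplified, made-up stand-in for illustration.

import xml.etree.ElementTree as ET

SAMPLE = """
<Weakness ID="79" Name="Cross-site Scripting">
  <Demonstrative_Examples>
    <Example Language="PHP">echo $_GET['name'];</Example>
  </Demonstrative_Examples>
  <Observed_Examples>
    <Observed_Example Reference="CVE-2008-5080"/>
  </Observed_Examples>
</Weakness>
"""

def extract_examples(xml_text):
    """Return (language, snippet) pairs and the CVE references."""
    root = ET.fromstring(xml_text)
    demos = [(e.get("Language"), e.text.strip())
             for e in root.iter("Example")]
    cves = [o.get("Reference") for o in root.iter("Observed_Example")]
    return demos, cves

demos, cves = extract_examples(SAMPLE)
```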


Note that the code examples are by no means complete, but they might be 
good enough to start with.  If you pore through CVE, you will soon realize 
that it can be very time-consuming to go from a real-world open-source 
vuln report to the actual code snippet.


- Steve


Re: [SC-L] Metrics

2010-02-05 Thread Steven M. Christey


On Fri, 5 Feb 2010, McGovern, James F. (eBusiness) wrote:

One of the general patterns I noted while providing feedback to the 
OWASP Top Ten listserv is that top ten lists do sort differently. Within 
an enterprise setting, it is typical for enterprise applications to be 
built on Java, .NET or other compiled languages, whereas if I were doing 
an Internet startup I may leverage more scripting approaches. So, if 
different demographics have different behaviors what would a converged 
list or even a separate list tell us?


A converged list is useful for general recommendations to people who 
haven't made their own custom lists.  The 2010 Top 25, due to be released 
Feb 16, also considers alternate Focus Profiles with different 
prioritizations to serve different use cases and get people thinking about 
how to do their own prioritization.


The general list, meanwhile, captures what patterns may exist across all 
participants - i.e., what everyone is most worried about.


- Steve


Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Steven M. Christey


On Wed, 3 Feb 2010, Gary McGraw wrote:

Popularity contests are not the kind of data we should count on.  But 
maybe we'll make some progress on that one day.


That's my hope, too, but I'm comfortable with making baby steps along the 
way.



Ultimately, I would love to see the kind of linkage between the collected
data (evidence) and some larger goal ("higher security" whatever THAT
means in quantitative terms) but if it's out there, I don't see it


Neither do I, and that is a serious issue with models like the BSIMM 
that measure second order effects like activities.  Do the activities 
actually do any good?  Important question!


And one we can't answer without more data that comes from the developers 
who adopt any particular practice, and without some independent measure of 
what success means.  For example: I am a big fan of the attack surface 
metric originally proposed by Michael Howard and taken up by Jeannette Wing 
et al. at CMU (still need to find the time to read Manadhata's thesis, 
alas...)  It seems like common sense that if you reduce attack surface, 
you reduce the number of security problems, but how do you KNOW!?



The 2010 OWASP Top 10 RC1 is more data-driven than previous versions; same
with the 2010 Top 25 (whose release has been delayed to Feb 16, btw).
Unlike last year's Top 25 effort, this time I received several sources of
raw prevalence data, but unfortunately it wasn't in sufficiently
consumable form to combine.


I was with you up until that last part.  Combining the prevalence data 
is something you guys should definitely do.  BTW, how is the 2010 CWE-25 
(which doesn't yet exist) more data driven??


I guess you could call it a more refined version of the popularity 
contest that you already referred to (with the associated limitations, 
and thus subject to some of the same criticisms as those pointed at 
BSIMM): we effectively conducted a survey of a diverse set of 
organizations/individuals from various parts of the software security 
industry, asking what was most important to them, and what they saw the 
most often.  This year, I intentionally designed the Top 25 under the 
assumption that we would not have hard-core quantitative data, recognizing 
that people WANTED hard-core data, and that the few people who actually 
had this data, would not want to share it.  (After all, as a software 
vendor you may know what your own problems are, but you might not want to 
share that with anyone else.)


It was a bit of a surprise when a handful of participants actually had 
real data - but then the problem I'm referring to with respect to 
"consumable form" reared its ugly head.  One third-party consultant had 
statistics for a broad set of about 10 high-level categories representing 
hundreds of evaluations; one software vendor gave us a specific weakness 
history - representing dozens of different CWE entries across a broad 
spectrum of issues, sometimes at very low levels of detail and even 
branching into the GUI part of CWE which almost nobody pays attention to - 
but only for 3 products.  Another vendor rep evaluated the dozen or two 
publicly-disclosed vulnerabilities that were most severe according to 
associated CVSS scores.  Those three data sets, plus the handful of others 
based on some form of analysis of hard-core data, are not merge-able. 
The irony with CWE (and many of the making-security-measurable efforts) is 
that it brings sufficient clarity to recognize when there is no clarity... 
the "known unknowns," to quote Donald Rumsfeld.  I saw this in 1999 in the 
early days of CVE, too, and it's still going on - observers of the 
oss-security list see this weekly.


For data collection at such a specialized level, the situation is not 
unlike the breach-data problem faced by the Open Security Foundation in 
their Data Loss DB work - sometimes you have details, sometimes you don't. 
The Data Loss people might be able to say "well, based on this 100-page 
report we examined, we think it MIGHT have been SQL injection" but that's 
the kind of data we're dealing with right now.


Now, a separate exercise in which we compare/contrast the customized top-n 
lists of those who have actually progressed to the point of making them... 
that smells like opportunity to me.



I for one am pretty satisfied with the rate at which things are
progressing and am delighted to see that we're finally getting some raw
data, as good (or as bad) as it may be.  The data collection process,
source data, metrics, and conclusions associated with the 2010 Top 25 will
probably be controversial, but at least there's some data to argue about.


Cool!


To clarify to others who have commented on this part - I'm talking 
specifically about the rate in which the software security industry seems 
to be maturing, independently of how quickly the threat landscape is 
changing.  That's a whole different, depressing problem.


- Steve

Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Steven M. Christey


On Thu, 4 Feb 2010, Jim Manico wrote:

These companies are examples of recent epic security failure. Probably 
the most financially damaging infosec attack, ever. Microsoft let a 
plain-vanilla 0-day slip through ie6 for years


Actually, it was a not-so-vanilla use-after-free, which once upon a time 
was only thought of as a reliability problem, but lately, exploit and 
detection techniques have begun bearing fruit for the small 
number of people who actually know how to get code execution out of these 
bugs.  In general, Microsoft (and others) have gotten their software to 
the point where attackers and researchers have to spend a lot of time and 
$$$ to find obscure vuln types, then spend some more time and $$$ to work 
around the various protection mechanisms that exist in order to get code 
execution instead of a crash.


I can't remember the last time I saw a Microsoft product have a 
mind-numbingly-obvious problem in it.  It would be nice if statistics were 
available that measured how many person-hours and CPU-hours were used to 
find new vulnerabilities - then you could determine the ratio of 
level-of-effort to number-of-vulns-found.  That data's not available, 
though - we only have anecdotal evidence by people such as Dave Aitel and 
David Litchfield saying it's getting more difficult and time-consuming.


- Steve


Re: [SC-L] BSIMM update (informIT)

2010-02-02 Thread Steven M. Christey


On Tue, 2 Feb 2010, Wall, Kevin wrote:


To study something scientifically goes _beyond_ simply gathering
observable and measurable evidence. Not only does data need to be
collected, but it also needs to be tested against a hypothesis that offers
a tentative *explanation* of the observed phenomena;
i.e., the hypothesis should offer some predictive value. Furthermore,
the steps of the experiment must be _repeatable_, not just by
those currently involved in the attempted scientific endeavor, but by
*anyone* who would care to repeat the experiment. If the
steps are not repeatable, then any predictive value of the study is lost.


I believe that the cross-industry efforts like BSIMM, ESAPI, top-n lists, 
SAMATE, etc. are largely at the beginning of the data collection phase. 
It shouldn't be much of a surprise that many companies participate in 
two or more of these efforts (although simultaneously disconcerting, but 
that's probably what happens in brand-new areas).


Ultimately, I would love to see the kind of linkage between the collected 
data (evidence) and some larger goal ("higher security" whatever THAT 
means in quantitative terms) but if it's out there, I don't see it, or 
it's in tiny pieces... and it may be a few years before we get to that 
point.  CVE data and trends have been used in recent years, or should I 
say abused or misused, because of inherent bias problems that I'm too lazy 
to talk about at the moment.


In CWE, one aspect of our research is to tie attacks to weaknesses, 
weaknesses to mitigations, etc. so that there is better understanding of 
all the inter-related pieces.  So when you look at the CERT C coding 
standard and its ties back to CWE, you see which rules directly 
reduce/affect which weaknesses, and which ones don't.  (Or, you *could*, 
if you wanted to look at it closely enough).


The 2010 OWASP Top 10 RC1 is more data-driven than previous versions; same 
with the 2010 Top 25 (whose release has been delayed to Feb 16, btw). 
Unlike last year's Top 25 effort, this time I received several sources of 
raw prevalence data, but unfortunately it wasn't in sufficiently 
consumable form to combine.


In tool analysis efforts such as SAMATE, we are still wrestling with the 
notion of what a "false positive" really means, not to mention the 
challenge of analyzing mountains of raw data, using tools that were 
intended for developers in a third-party consulting context, combined with 
the multitude of perspectives in how weaknesses are described (e.g., what 
do you do if there's a chain from weakness X to Y, and tool 1 reports X, 
and tool 2 reports Y?)


In fact, I am willing to bet that the different members of my 
Application Security team who have all worked together for about 8 years 
would answer a significant number of the BSIMM Begin survey questions 
quite differently.


Even surveys using much lower-level detailed questions - such as "which 
weaknesses on a nominee list of 41 are the most important and prevalent?" 
- have had distinct responses from multiple people within the same 
organization. (I'll touch on this a little more when the 2010 Top 25 is 
released).  Arguably many of these differences in opinion come down to 
variations in context and experience, but unless and until we can model 
context in a way that makes our results somewhat shareable, we can't get 
beyond the data collection phase.


I for one am pretty satisfied with the rate at which things are 
progressing and am delighted to see that we're finally getting some raw 
data, as good (or as bad) as it may be.  The data collection process, 
source data, metrics, and conclusions associated with the 2010 Top 25 will 
probably be controversial, but at least there's some data to argue about. 
So in that sense, I see Gary's article not so much as a clarion call for 
action to a reluctant and primitive industry, but an early announcement of 
a shift that is already underway.


- Steve


Re: [SC-L] BSIMM update (informIT)

2010-02-02 Thread Steven M. Christey


On Tue, 2 Feb 2010, Arian J. Evans wrote:


BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time or need or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.


I'm looking forward to what BSIMM Basic discovers when talking to small 
and mid-size developers.  Many of the questions in the survey PDF assume 
that the respondent has at least thought of addressing software security, 
but not all questions assume the presence of an SSG, and there are even 
questions about the use of general top-n lists vs. customized top-n lists 
that may be informative.


- Steve


Re: [SC-L] BSIMM update (informIT)

2010-01-29 Thread Steven M. Christey


Speaking of "top 25 tea leaves," the "bug parade" boogeyman just called 
and reminded me that the 2010 Top 25 is due to be released next Thursday, 
February 4.  Thanks for the plug.


A preview of some of the brand-new features:

1) Data-driven ranking with alternate metrics to feed the brain and
   stimulate wider discussion - featuring special guest star Elizabeth
   Nichols

2) Multiple focus profiles to avoid one-size-fits-all

3) Cross-cutting mitigations that expand far beyond the Top 25 - AND show
   which mitigations address which Top 25's

4) References to resources such as BSIMM (and even that controversial
   bad-boy ESAPI) to get people thinking even more about systematic
   software security

... and a few more tidbits.

This particular Cargo-Culting pseudoscientist has dutifully listened to 
his fellow islanders.  This year we've made shiny new airstrips and 
control towers, and apparently we've already started some fires.  The 
planes will TOTALLY come back!  Or maybe I'm just feeling a little 
whimsical.


- Steve

P.S.  I can't wait until software security becomes an actual science, 
because as we all know, scientists are much too rational to ever indulge 
in self-destructive infighting and name-calling that hinders opportunities 
for progress in their field.



Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread Steven M. Christey


On Thu, 7 Jan 2010, Stephen Craig Evans wrote:


I am VERY curious to learn how these happened...


My name is Steve.  I had a 2010 problem.

An internal CVE support program was hit by this issue.  Fortunately, 
there weren't any fatal results and it was only an annoyance.  However: I 
had an input validation routine that did a sanity-check on dates, which I 
wrote sometime around 2005.  The check would generate a specific complaint 
if a date was 2010 or later since, after all, it was 2005 - a time when 
resources for development were extremely low - and it worked back then. 
(Now I'm starting to rationalize that all my bad practices back then were 
"Agile" instead of cheap hacks.  Yes, that was deliberately inflammatory.)


The regexp to check the year was something like /^(199\d|200\d)$/, and the 
informative error message would say that the year portion of the date 
appeared to be invalid.  There was a separate check that also made sure 
that a given date wasn't in the future, so this message was basically a 
secondary bit of detail.
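A minimal reconstruction of that failure mode (the helper names are mine; the regex is the one described above):

```python
import re
from datetime import date

# Hypothetical reconstruction of the sanity check described above: it
# accepts 1990-2009 and rejects everything else, including 2010.
YEAR_RE = re.compile(r"^(199\d|200\d)$")

def year_looks_valid(year: str) -> bool:
    return YEAR_RE.match(year) is not None

assert year_looks_valid("2005")      # worked fine when written in 2005...
assert not year_looks_valid("2010")  # ...then complained on 2010-01-01

# A less fragile check compares against the current year, so the
# acceptance window moves forward on its own.
def year_in_range(year: str, earliest: int = 1990) -> bool:
    return year.isdigit() and earliest <= int(year) <= date.today().year

assert year_in_range("2010")
```

The lesson is less about regexes than about hard-coding "the future" as a fixed constant.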


Anyway, 5 years passed and I forgot about the limitation of that routine 
until it started generating informational error messages when CVE team 
members submitted new CVE content.


One could say that this was under the radar of my threat model when it 
should have been part of the threat model for these major vendors, but it 
was still a known bug/feature that never got fixed until it had to be 
fixed.


I'm sure I have a few other date-sensitive dependencies that are not a 
high priority to fix, given current conditions and practices. I'll 
probably be close to retirement age come 2038 when the Unix year bug shows 
up.  If CVE is still around then, and my code is still being used, well, 
it's gonna be someone else's problem.


Anybody else willing to admit their 2010 mistakes and the conditions that 
led to them?  Or was it just me and a couple huge companies?


- Steve


Re: [SC-L] Provably correct microkernel (seL4)

2009-10-03 Thread Steven M. Christey

I wonder what would happen if somebody offered $1 to the first applied
researcher to find a fault or security error.  According to
http://ertos.nicta.com.au/research/l4.verified/proof.pml, buffer
overflows, memory leaks, and other issues are not present.  Maybe people
would give up if they don't gain some quick results, but it seems like
you'd want to sanity-check the claims using alternate techniques.

- Steve


Re: [SC-L] Seeking vulnerable server-side scripts

2009-05-06 Thread Steven M. Christey

Jeremy,

CVE is littered with these kinds of issues, for PHP especially.  The
scripts are often open source, fully-functional packages that just happen
to have lots of security issues.  Sometimes the root cause is buried
fairly deep in the code, but the people who find these bugs often care
only about the attack.  The CVE descriptions are often straightforward.

To find the best options, I'd grab CVEs that mention scripts ending in a
.php extension, select the ones with both milw0rm and Secunia references,
then examine the milw0rm reference to see if the researcher lists a
download URL for the product (this is probably 25% or more of all
milw0rms, so you won't have to look very hard).  While you'll get a lot of
XSS, SQL injection, and file inclusion, you'll also get more subtle issues
like eval injection, file upload, redirect-without-exit, static code
injection, variable extraction, and other issues that affect most
interpreted languages (although the vuln research emphasis is on PHP).
Since CVE descriptions are well-formed for well-known vuln types, you
could find the weird ones pretty quickly.
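That triage process can be sketched in a few lines.  The record format here is invented for illustration; real CVE data would come from the official feeds, and reference source names may differ:

```python
# Hypothetical sketch of the selection heuristic described above.
def candidate_cves(records):
    """Yield IDs of PHP-script CVEs with both milw0rm and Secunia refs."""
    for rec in records:
        sources = {ref["source"] for ref in rec["references"]}
        if ".php" in rec["description"] and {"MILW0RM", "SECUNIA"} <= sources:
            yield rec["id"]

records = [
    {"id": "CVE-2009-0001",
     "description": "SQL injection in index.php in ExampleApp",
     "references": [{"source": "MILW0RM"}, {"source": "SECUNIA"}]},
    {"id": "CVE-2009-0002",
     "description": "Buffer overflow in a server daemon",
     "references": [{"source": "BID"}]},
]
print(list(candidate_cves(records)))  # ['CVE-2009-0001']
```

From the surviving candidates, the milw0rm reference would then be checked by hand for a product download URL.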

- Steve


Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist (informIT)

2009-03-18 Thread Steven M. Christey

On Wed, 18 Mar 2009, Gary McGraw wrote:

 Many of the top N lists we encountered were developed through the
 consistent use of static analysis tools.

Interesting.  Does this mean that their top N lists are less likely to
include design flaws?  (though they would be covered under various other
BSIMM activities).

 After looking at millions of lines of code (sometimes constantly), a
 ***real*** top N list of bugs emerges for an organization.  Eradicating
 number one is an obvious priority.  Training can help.  New number
 one...lather, rinse, repeat.

I believe this is reflected in public CVE data.  Take a look at the bugs
that are being reported for, say, Microsoft or major Linux vendors or most
any product with a long history, and their current number 1's are not the
same as the number 1's of the past.

- Steve


Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist (informIT)

2009-03-18 Thread Steven M. Christey

On Wed, 18 Mar 2009, Gary McGraw wrote:

 Because it is about building a top N list FOR A PARTICULAR ORGANIZATION.
 You and I have discussed this many times.  The generic top 25 is
 unlikely to apply to any particular organization.  The notion of using
 that as a driver for software purchasing is insane.  On the other hand
 if organization X knows what THEIR top 10 bugs are, that has real value.

Got it, thanks.  I guessed as much.  Did you investigate whether the
developers' personal top-N lists were consistent with what their customers
cared about?  How did the developers go about selecting them?

By the way, last week in my OWASP Software Assurance Day talk on the Top
25, I had a slide on the role of top-N lists in BSIMM, where I attempted
to say basically the same thing.  This was after various slides that tried
to emphasize how the current Top 25 is both incomplete and not necessarily
fully relevant to a particular organization's needs.  So while the message
may have been diluted during initial publication, it's being refined
somewhat.

- Steve


Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist (informIT)

2009-03-18 Thread Steven M. Christey

On Wed, 18 Mar 2009, Gary McGraw wrote:

 Both early phases of software security made use of any sort of argument
 or 'evidence' to bolster the software security message, and that was
 fine given the starting point. We had lots of examples, plenty of good
 intuition, and the best of intentions. But now the time has come to put
 away the bug parade boogeyman, the top 25 tea leaves, black box web app
 goat sacrifice, and the occult reading of pen testing entrails. The time
 for science is upon us.

Given your critique of Top-N lists and bug parades in this paragraph and
elsewhere, why is a top N bugs list explicitly identified in BSIMM
CR1.1, and partially applicable in places like T1.1, T2.1, SFD2.1, SR1.4,
and CR2.1?

- Steve


[SC-L] SDL / Secure Coding and impact on CWE / Top 25

2009-01-28 Thread Steven M. Christey

In the past year or so, I've been of a growing mindset that one of the
hidden powers of CWE and other weakness/bug/vulnerability/attack
taxonomies would be in evaluating secure coding practices: if you do X and
Y, then what does that actually buy you, in terms of which vulnerabilities
are fixed or mitigated?  We capture some of that in CWE with CAPEC
mappings for attacks.

We've also mapped to the CERT C Secure Coding standard, as reflected in
this CWE view: http://cwe.mitre.org/data/graphs/734.html (for the
complete/detailed listing, click the Slice button on the upper right and
sift through the Taxonomy Mappings).  Or, check out the coverage graphs
that show where the coding standard fits within the two main CWE
hierarchical views: http://cwe.mitre.org/data/pdfs.html

Now Microsoft has released a paper that shows how their SDL practices
address the Top 25, like they did when the OWASP Top Ten came out.  To me,
this seems like a productive practice and a potential boon to consumers,
*if* other vendors adopt similar practices.  Are there ways that the
software security community can further encourage this type of thing from
vendors?  Should we?

Gary, do your worst ;-)

http://blogs.msdn.com/sdl/archive/2009/01/27/sdl-and-the-cwe-sans-top-25.aspx

- Steve


Re: [SC-L] Some Interesting Topics arising from the SANS/CWE Top 25

2009-01-14 Thread Steven M. Christey

On Tue, 13 Jan 2009, Greg Beeley wrote:

 Steve I agree with you on this one.  Both input validation and output
 encoding are countermeasures to the same basic problem -- that some of
 the parts of your string of data may get treated as control structures
 instead of just as data.

Note that I'm only talking about this in light of injection-related
issues.

Input validation is an important countermeasure for buffer overflows, for
example, whereas output encoding isn't.  (Unless you want to take the
approach that things like strncpy() or safe string libraries are really
related to controlling output when you process strings from an input
buffer to an output buffer, and shellcode is a means of injection...)


  For the purpose of this email I'm using a definition of input
 validation as sanitizing/restricting data at its entry to a program,
 and encoding as the generation of any string in any format other than
 straight binary-safe data.

This touches on something that I've been a little concerned about, which
is the variety of definitions that people have for the same word.  We
struggle with that in CWE - which is why "output encoding/escaping" is in
the CWE name instead of, say, "sanitization" or "validation".  I don't think
there's necessarily a solution [face it, we're not all going to adopt the
same terminology willingly], but it's a problem.

- Steve


Re: [SC-L] SANS Institute - CWE/SANS TOP 25 Most Dangerous ProgrammingErrors

2009-01-13 Thread Steven M. Christey

On Tue, 13 Jan 2009, Gary McGraw wrote:

 I thought you might get a kick out of it.

I did! :-)  Always good to have debates.

"Executives don't care about technical bugs"

No, but they do what PCI says they have to (i.e. listen to the OWASP Top
Ten).  They do care about the bottom line.  They hate buying software and
finding out how crappy it is afterward.

The Siemens example comes to mind.  It was mind-boggling to me to hear
that they were forced to pay 150% of the original cost just to get the
software they bought in a secure form.

http://www.networkworld.com/news/2009/011209-software-security-effort.html?page=2


"Too much focus on bugs."

The Top 25 has 4 or 5 items that are clearly design-related, maybe more if
you think that improper input validation is related to design, and still
more if you count all the external control items which may be
implementation or design depending on who's talking and what the
programmer's responsibilities are.  The Top 25 even has a couple classic
Saltzer-and-Schroeder examples in there (rephrased to avoid the incredible
confusion and misinterpretation that has gone on with the original SS).

"Vulnerability lists help auditors more than developers."

Agree - except the Top 25 has anywhere from 3 to 10 specific
mitigations/preventions for each CWE.

And whether we like it or not, auditors help to drive change.  Maybe not
the optimal change, but they drive change.

Also - when the developers' software managers are told by their marketers
that they could lose money, then the managers will figure out how to get
the developers to improve.  Long-term thinking here - I know, that's not
allowed for this industry.

"One person's top bug is another person's yawner."

Absolutely, a point I brought up in the other post I made to SC-L on some
interesting challenges.

"Using bug parade lists for training leads to awareness but does not
educate."

Yep - which is why we want universities to get cracking, and if the Top 25
helps to prod them on, then so be it.

"Bug lists change with the prevailing technology winds."

Yep - which is why the official name of the Top 25 begins with "2009".
We'll do this again next year.

"Top ten lists mix levels."

Also brought up in my last email as a problem for the industry in general.

Regarding Seven Pernicious Kingdoms - how does the Top 25 map to them?  I
could classify most of the Top 25 in multiple categories.  Should poor
output encoding be put under the Input validation kingdom?  Sounds kind
of like using the Start button in Windows to shut down ;-)

Should cleartext transmission be put under Environment?  or Security
Features?

Again, we *all* have this problem.

"Automated tools can find bugs ... let them."

Yes, and a lesson of the Top 25 (that we all already know) is that when
people start to apply it, they'll see how a tool won't be a silver bullet.
Also covered in my last email...

"When it comes to testing, security requirements are more important than
vulnerability lists."

Which is covered a teeny bit in the other email I sent.

Also, New York State has put up draft text that mentions the Top 25 as
part of a condition for acquisition.  Is that enough?  Hardly.  But things
like the Black/Williams software facts label aren't mature either, and
Dept. of Homeland Security probably has a couple years to go before their
work on assurance cases begins to take shape.

"Ten is not enough."

Neither is 25.  Number 26 (another design issue) is also mentioned in the
other email I posted.

I'm of the mindset that the Top 25 is, short-term, an awareness tool for
developers and for customers.  In the longer term, maybe it will be a
little blip on the road to actionable software assurance.  Given that
approximately 1000 people have created delicious bookmarks for it, and
I've already seen comments from a couple developers ("hey, we should go
check this out") - then we are already seeing some success.

Gary, it might seem ironic since I am leading the most comprehensive bug
parade out there in CWE, but I agree with you that just following the bug
parade is not enough.  The Top 25 is a means to an end, and not the end
itself.  Only time will tell, though.

- Steve


[SC-L] Some Interesting Topics arising from the SANS/CWE Top 25

2009-01-12 Thread Steven M. Christey

All, I'm the editor of the Top 25 list.  Thanks to Ken and others on SC-L
who provided some amazing feedback before its publication.  I hope we were
able to address most of your concerns and am sorry that we couldn't
address all of them.

Note that MITRE's site for the Top 25 is more technically detailed and has
more supporting documents than the SANS site, which is really a
jumping-off point.  See http://cwe.mitre.org/top25/.  Also, a process
document and changelog is on that site.

Here are some topics that arose during the construction of the Top 25. I
thought these might make some interesting points of debate or discussion
on this list:

1) The inclusion of output encoding (CWE-116) in conjunction with
   input validation (CWE-20) generated a lot of mixed reviews.  Part of it
   seems to come down to different ways of looking at the same problem.
   For example, is SQL injection strictly an input validation
   vulnerability, or output sanitization/validation/encoding or whatever
   you want to call it? In a database, the name O'Reilly may pass your
   input validation step, but you need to properly quote it before sending
   messages to the database.  And the actual database platform itself has
   no domain model to validate whether the incoming query is consistent
   with business logic.  My personal thinking, which seems reflected by
   many web application people, is that many injection issues are related
   to encoding at their core, and the role of input validation is more
   defense-in-depth (WITH RESPECT TO INJECTION ONLY).  Yet smart people
   insist that it's still input validation, even when presented with the
   example I gave.  So what's the perspective difference that's causing
   the disconnect?

2) Countless mitigations were suggested by contributors on top of some
   of the ones already in the CWE entries (admittedly some of them
   weak).  Fortunately, we had time (for some definition of time
   that sometimes excluded a personal life) to update many of the core
   CWE entries.  Many mitigations had limitations, either in terms of
   impacts on usability, whether they could be applied at all in some
   circumstances, or if they were sufficiently effective.  The variety
   of customized mitigations is staggering, which to me suggests that
   more framework/methodology definition is needed.

3) Contributors advocated selecting list items based on how often the
   weakness appears in software (prevalence) and how severe the
   consequences are when the weakness leads to a vulnerable condition
   (severity).  Many people advocated using real-world data to make
   the determination for prevalence.  Problem: there's no real-world
   data available!  CVE vulnerability data is insufficient - they
   concentrate on the vulnerability side (XSS) instead of the
   weakness side (e.g. use of the wrong encoding at the wrong time).
   If people have real-world, weakness-focused data, then they aren't
   telling.

4) Some questions with respect to the assignment of severity scores
   led me to attempt to build a threat model and to try to more formally
   define other supporting fields like ease of detection, in light of the
   skilled, determined attacker.  I don't think this model was
   sufficiently vetted, and I'm sure people will have concerns with how
   it's been defined (including your threat model is really just talking
   about a threat agent.)  HOWEVER, I don't know of any other prioritized
   list that has tried to define some threat model to help with
   prioritization.  I would love to see this kind of investigation
   continue in other efforts.  (An acronym called CWSS comes to mind...)

5) The threat model, as roughly implied by how most people were
   voting for which items to include on the Top 25, treated availability
   as slightly less important than integrity and confidentiality.  Thus
   CWE-400 (Resource Consumption) had the dubious distinction of being
   number 26.  (CWE-400 and other also-rans are in the "On the Cusp"
   section of MITRE's Top 25 site.)  Clearly, availability may be more
   important in some environments e.g. critical infrastructure or
   e-commerce.  The unsurprising implications are that no single threat
   model will work for different types of organizations when composing a
   general list like the Top 25.  Thus it has a sort of fudge factor that
   helps make it generally applicable to organizations with varying threat
   environments, within some degree of tolerance.  It seems like a
   fundamental problem with a list of that sort.

6) Many expert reviewers will notice the varying abstraction levels
   and overlapping concepts in the list.  There are various explanations
   for this as summarized in the change logs and FAQ.  My main point in
   bringing this up was, a lot of people want things to be at a fixed
   level of abstraction and mutually exclusive, but I don't think it's
   feasible given our current understanding of security issues.  The 

[SC-L] CWE/SANS Top 25 Most Dangerous Programming Errors

2008-12-17 Thread Steven M. Christey

Since this is the week of the top-lists related to secure coding, I
thought I'd notify the SC-L people about a new collaboration between SANS
and MITRE.  We are creating a Top 25 list of the worst programming errors,
targeted largely at developers, software managers, and CIOs.

The list is not as high-level as the OWASP Top Ten, and not focused just
on web applications; it attempts to provide actionable details to
programmers with an informal tone.  Some SC-L subscribers are already
aware of it and have provided feedback.

The initial announcement was in late November; see
http://www.sans.org/resources/top25/

So far, we have reached out to and received input from major software
vendors, security tool vendors, consultants, the OWASP ESAPI group, and
others in industry, academia, and government.

We have one or two more rounds of review before the Top 25 list is
published in early January.

I'd been meaning to contact this list, but it slipped my mind until the
latest flurry of activity.  If you want to participate, feel free to
contact me and Bob Martin (ramar...@mitre.org) directly.

Thanks,
Steve


Re: [SC-L] Software Assist to Find Least Privilege

2008-11-25 Thread Steven M. Christey

On Tue, 25 Nov 2008, Mark Rockman wrote:

 Assuming this is repeated for every use case, the resulting
 reports would be a very good guide to how CAS settings should be
 established for production.  Of course, everytime the program is changed
 in any way, the process would have to be repeated.

Better - and absolutely unachievable any time soon - would be for the
application itself to state more explicitly what its requirements of the
OS are, and what its intended behaviors are.  Kind of like SELinux but
simpler.  More easily said than done, but until we develop richer models
for representing an application's legitimate behaviors, automated
detection of these types of issues is likely to be difficult.

- Steve


Re: [SC-L] Language agnostic secure coding guidelines/standards?

2008-11-17 Thread Steven M. Christey

The CWE Research view (CWE-1000) is language-neutral at its higher-level
nodes, and decomposes in some areas into language-specific constructs.
Early experience suggests that this view is not necessarily
developer-friendly, however, because it's not organized around the types
of concepts that developers typically think in.

http://cwe.mitre.org/data/definitions/1000.html

(click the Graph tab on the top right of the page to see the breakdown)

Obviously the CWE is a badness-ometer-pedia but suggests some areas that
your guidelines would hopefully address.

- Steve


Re: [SC-L] Wysopal says tipping point reached...

2008-11-06 Thread Steven M. Christey

On Tue, 4 Nov 2008, Benjamin Tomhave wrote:

 An interesting read. Not much to really argue with, I don't think.
 http://www.veracode.com/blog/2008/11/we%e2%80%99ve-reached-the-application-security-tipping-point/

Agree.  But, just to bolster (if it's relevant) I'll expand on my comment
to that blog post:

While we have not done a similar analysis in CVE, I believe that ISS'
statistics are valid based on what we are seeing.

Further, for the OS software vendors, the types of vulnerabilities are
often unusual (e.g. use-after-free, missing initialization) or very
difficult to find and exploit.  This suggests a significant difference
between the level of security at the OS level versus the application
level.  Generally speaking, of course.  (See the 2006 CVE vulnerability
trends for further proof of differences between OS and application stats;
yes, we'll be updating those stats for 2007/2008).

- Steve

P.S. the Veracode blog post generated 6 W3C validation errors, so it's
more authoritative than some other web pages.  Sorry if this joke doesn't
register with people, I forget which mailing list people will find this
postscript semi-hilarious/semi-cynical in.


Re: [SC-L] Lateral SQL injection paper

2008-04-29 Thread Steven M. Christey

On Tue, 29 Apr 2008, Joe Teff wrote:

  If I use Parameterized queries w/ binding of all variables, I'm 100%
  immune to SQL Injection.

 Sure. You've protected one app and transferred risk to any other
 process/app that uses the data. If they use that data to create dynamic
 sql, then what?

Let's call these "using apps" for clarity in the rest of this post.

I think it's the fault of the using apps for not validating their own
data.

Here's a pathological and hopefully humorous example.

Suppose you want to protect those using apps against all forms of
attack.

How can you protect every using app against SQL injection, XSS, *and* OS
command injection?  Protecting against XSS (say, by converting < to &lt;
and > to &gt;, among other things) suddenly creates an OS command
injection scenario because & and ; typically have special meaning in Unix
system() calls.  Quoting against SQL injection (turning ' into \') will
probably fool some XSS protection
mechanisms and/or insert quotes after they'd already been stripped.
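The conflict is easy to demonstrate: HTML-escaping output for XSS produces entity syntax, and that syntax is built out of shell metacharacters.  A minimal Python sketch:

```python
import html

payload = "<script>"
escaped = html.escape(payload)

# The XSS-safe form is built from entity syntax, so it now contains '&'
# and ';' -- characters with special meaning to a Unix shell via system().
assert escaped == "&lt;script&gt;"
assert "&" in escaped and ";" in escaped
```

One boundary's "safe" encoding is another boundary's attack syntax, which is why a single universal sanitization pass can't exist.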

As a result, the only safe data would be alphanumeric without any spaces -
after all, you want to protect your using apps against whitespace,
because that's what's used to introduce new arguments.

But wait - buffer overflows happen all the time with long alphanumeric
strings, and Metasploit is chock full of alpha-only shellcode, so
arbitrary code execution is still a major risk.  So we'll have to trim the
alphanumeric strings to... hmmm... one character long.

But, a one-character string will probably be too short for some using
apps and will trigger null pointer dereferences due to failed error
checking.  Worse, maybe there's a buffer underflow if the using app does
some negative offset calculations assuming a minimum buffer size.

And what if we're providing a numeric string that the using app might
treat as an array index?  So, anything that looks like an ID should be
scrubbed to a safe value, say, 1, since presumably the programmer doesn't
allocate 0-size arrays.  But wait, a user ID of 1 is often used to
identify the admin in a using app, so this would be tantamount to giving
everyone admin privileges!  We shouldn't accept any numbers at all.

And, we periodically see issues where an attacker can bypass a
lowercase-only protection mechanism by using uppercase, so we'd best set
the characters to all-upper or all-lower.

So, maybe the best way to be sure we're protecting using apps is to send
them no data at all (which will still trigger crashes in apps that assume
they'll be hearing from someone eventually).

Or, barring that, you pass along some meta-data that explicitly states
what protections have or have not been applied to the data you're sending
- along with an integrity check of your claims.
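A sketch of that "meta-data plus integrity check" idea follows.  Everything here is invented for illustration: the envelope format, the claim names, and the assumption that both sides share a key out of band:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-shared-key"  # assumption: agreed on out of band

def tag_with_claims(data: str, claims: list) -> dict:
    """Attach sanitization claims plus an HMAC over the data and claims."""
    msg = json.dumps({"data": data, "claims": claims}, sort_keys=True)
    mac = hmac.new(SHARED_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return {"data": data, "claims": claims, "mac": mac}

def verify(envelope: dict) -> bool:
    """Recompute the HMAC; reject envelopes whose claims were altered."""
    msg = json.dumps({"data": envelope["data"], "claims": envelope["claims"]},
                     sort_keys=True)
    expected = hmac.new(SHARED_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["mac"])

env = tag_with_claims("O''Reilly", ["sql-escaped"])
assert verify(env)

env["claims"] = ["xss-escaped"]  # a tampered (or mistaken) claim fails
assert not verify(env)
```

Of course, this only authenticates the claims; it can't make a using app actually check them.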

Of course, some using apps won't check that integrity and will accept
bad data from anywhere, not just you, so they'll be vulnerable again,
despite your best intentions.

The alternate approach is to pick and choose which vulns you'll protect
using apps against.  But then, if you've protected a using app against SQL
injection, but it moves to a non-database model instead, you've just
broken your legitimate functionality.  So, you're stuck with modeling
which using apps are using which technologies and might be subject to
which vulns.  You will also need a complete model of what the using app's
behaviors are, and you'll need to keep different models for each different
version and operating environment.  This will become brittle and quickly
unmaintainable, and eventually introduce unrelated security issues as a
result of that brittleness.

To my current way of thinking, the two main areas of responsibility are:

- for the caller to make sure that the request/message is perfectly
structured and delimited, and semantically correct for what the caller is
asking the callee to do.  The current browser URI handler vulnerabilities,
and argument injection in general, are examples of violations of this
responsibility.

- for the callee, given any arbitrary message/request, to prove (or
enforce) that it is well-formed, to make sure that the caller has the
appropriate privileges to make that message/request in the first place,
and to protect itself against SQL injection when interacting with a DB,
against XSS when printing out to a web page, etc.
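That callee-side responsibility can be sketched as follows (names are hypothetical): parameter binding at the database boundary and encoding at the HTML boundary, each applied at the point where the data is used:

```python
import html
import sqlite3

def store_comment(conn, comment):
    # Database boundary: parameter binding instead of string-built SQL.
    conn.execute("INSERT INTO comments (body) VALUES (?)", (comment,))

def render_comments(conn):
    # HTML boundary: encode at output time, not at input time.
    rows = conn.execute("SELECT body FROM comments").fetchall()
    return "".join("<p>%s</p>" % html.escape(body) for (body,) in rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")
store_comment(conn, "O'Reilly says: <b>hi</b>")
print(render_comments(conn))  # <p>O&#x27;Reilly says: &lt;b&gt;hi&lt;/b&gt;</p>
```

Because each boundary does its own defense, the same stored string is safe in both contexts without any guessing about what upstream callers may have stripped.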


I recognize that you might not have a choice with stovepipe or legacy
applications, or in proxy/firewall code that resides between two
components.  I feel for anyone wrestling with those problems.  But,
"protect using apps against themselves" as general advice seems fraught
with peril.

- Steve

Re: [SC-L] Programming language comparison?

2008-02-05 Thread Steven M. Christey

On Mon, 4 Feb 2008, ljknews wrote:

  (%s to fill up disk or memory, anybody?), so it's marked with
  "All" and it's not in the C-specific view, even though there's a heavy
  concentration of format strings in C/C++.

 It is marked as "All"?

 What is the construct in Ada that has such a risk?

Hmmm, I don't see any, but then again I don't know Ada.  Is there no
equivalent to format strings in Ada?  No library support for it?

Your question actually highlights the point I was trying to make - in CWE,
we don't yet have a way of specifying language families, such as any
language that directly supports format strings, or any language with
dynamic evaluation.

- Steve


Re: [SC-L] Open Source Code Contains Security Holes -- Open Source -- InformationWeek

2008-01-10 Thread Steven M. Christey

Another question is how many of the reported bugs wound up being false
positives.  Through casual conversations with some vendor (I forget which),
it became clear that the massive number of reported issues was very
time-consuming to deal with, and not always productive.  Of course this is
no surprise to people on this list, but important to note.

Regarding vendor responses - through my work in CVE, I've noticed that
a developer who's been tagged often enough will eventually develop more
systematic responses such as secure APIs, coding standards, or at least
a thorough review.  This is briefly touched on in the "Unforgivable
Vulnerabilities" paper that I gave at Black Hat USA last year,
where I discuss vulnerability complexity as a qualitative indicator of
software security.

- Steve


Re: [SC-L] Insecure Software Costs US $180B per Year - Application and Perimeter Security News Analysis - Dark Reading

2007-11-30 Thread Steven M. Christey

On Fri, 30 Nov 2007, Shea, Brian A wrote:

 Software vendors will need a 3 tier approach to software security:  Dev
 training and certification, internal source testing, external
 independent audit and rating.

I don't think I've seen enough emphasis on this latter item.  A
sufficiently vibrant set of independent testing organizations that follows
some established procedures would be one way for customers to get an
independent guarantee of software's (relative) security.  This in turn
could put pressure on other vendors to follow suit.

The challenges would be defining what those procedures should be,
maintaining them in a way so that they remain relevant, convincing
existing research organizations to participate, and handling the problem
of free (as in beer) software.

A gazillion years ago, John Tan of the L0pht proposed an Underwriters
Laboratories for software, and maybe its time is almost upon us.

- Steve


Re: [SC-L] Microsoft Pushes Secure, Quality Code

2007-10-08 Thread Steven M. Christey

Interesting that attack surface isn't included, given that Microsoft was
one of the earliest advocates of attack surface, a metric that is likely
strongly associated with the number of input-related vulnerabilities.
It's probably hard to do perfectly, though, especially if any third-party
APIs are involved.

Are there any tools out there that try to measure attack surface?  Has
anybody had any experience in trying to apply it?

- Steve


Re: [SC-L] Microsoft Pushes Secure, Quality Code

2007-10-08 Thread Steven M. Christey

On Mon, 8 Oct 2007, Gary McGraw wrote:

 Not surprising.  Last time I looked, attack surface is subjective.
 McCabe is not.  BTW, McCabe's Cyclomatic complexity boils down to 85%
 lines of code and 15% data flow if you do a principal component analysis
 on it.

Hopefully the SEI people are monitoring this list and can provide their
feedback.  They've done some concrete work in making attack surface as
objective as possible, to the point where they compared two FTP
servers about a year ago.  One of their papers comments that they wanted
to use the code scanners to make the calculations for them, but for some
reason they couldn't.

I was under the impression from Mike Howard's comments over the years
that MS had some concrete (perhaps subjective) comparisons between
different MS variants, and this was part of the argument for Vista's
security over past MS operating systems.

 Just throw the code in the box and turn the crank.  Then discard the
 results and you're done!

While I understand the sentiment, it seems to me that you can't get very
far without metrics of some sort.  Perhaps more importantly, the real
decision-makers need them because it's not their job (and probably not
their expertise) to pore through endless details.

- Steve


[SC-L] CWE Researcher List

2007-09-06 Thread Steven M. Christey

All,

I figured people on this list might be interested in this.  If you have
any concerns or suggestions about CWE, the upcoming months will be the
best time to raise them in a focused discussion forum, the CWE Researcher
List.

If you don't know what CWE is, then shame on me for not pimping it enough:
http://cwe.mitre.org

- Steve

--

MITRE has established a list for researchers and other parties who are
interested in detailed discussion of the Common Weakness Enumeration
(CWE).  CWE Draft 6 has over 600 nodes and is seeing increased usage. Over
the summer of 2007, MITRE has identified some themes within CWE that we'd
like to discuss with the community.  So, it's time for some focused
feedback.

Beginning next week, we will be using this CWE researcher list to conduct
in-depth technical discussions: where CWE is now, what are its strengths
and limitations, and where it needs to be.  We will bring up these larger
discussion points, solicit feedback, and modify CWE accordingly.  (Don't
worry, we won't bother you with the hundreds of minor edits we'll be
doing!)

We think that you could be an important contributor to the success of
CWE, so we are inviting you to join the list.  Your participation is
encouraged but not required.

To subscribe, see:

  http://cwe.mitre.org/community/registration.html

or just send an email to [EMAIL PROTECTED] with the command:

  subscribe CWE-RESEARCH-LIST

Note: posts to the list will be publicly archived.

We hope you can join!


Thank you,

Steve Christey
CWE Technical Lead

Bob Martin
CWE Project Lead


Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Steven M. Christey

On Tue, 26 Jun 2007, Kenneth Van Wyk wrote:

 Mind you, the overrun can only be exploited when specific characters
 are used as input to the loop in the code.  Thus, I'm inclined to
 think that this is an interesting example of a bug that would have
 been extraordinarily difficult to find using black box testing, even
 fuzzing.

I would assume that smart fuzzing could have lots of manipulations of
the HH:mm:ss.f format (the intended format mentioned in the advisory), so
this might be findable using black box testing, although I don't know how
many fuzzers actually know how to muck with time strings.  Because the
programmer told flawfinder to ignore the strncpy() that it had flagged, it
also shows a limitation of manual testing.

In CVE anyway, I've seen a number of overflows involving strncpy, and
they're not all off-by-one errors.  They're hard to enumerate because we
don't usually track which function was used, but here are some:

CVE-2007-2489 - negative length

CVE-2006-4431 - empty input causes crash involving strncpy

CVE-2006-0720 - incorrect strncpy call

CVE-2004-0500 - another bad strncpy

CVE-2003-0465 - interesting API interaction


- Steve


Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-26 Thread Steven M. Christey

 On 6/26/07 4:25 PM, Wall, Kevin [EMAIL PROTECTED] wrote:

 I mean, was the fix really rocket science that it had to take THAT
 LONG??? IMHO, no excuse for taking that long.

Some major vendor organizations, most notably Oracle and Microsoft, have
frequently stated that they can't always fix even simple vulnerabilities
instantly, because they have batteries of tests and platforms to verify
that the fix won't damage anything else.  I can see why this would be the
case, although I rarely hear vendors talk about what they're doing to make
their response time faster.  Open source vendors likely have similar
challenges, though maybe not on such a large scale.

I'd be interested to hear from the SDLC/CMM consultant types who work with
vendors on process, about *why* this is the case.

And in terms of future challenges: how can the lifecycle process be
changed so that developers can quickly and correctly fix show-stopping
issues (including/especially vulnerabilities)?  It would seem to me that
one way that vendors can compete, but don't, is in how quickly and
smoothly they fix issues in existing functionality, which might be a large
part of the operational expenses for an IT consumer.

- Steve


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Steven M. Christey

On Mon, 11 Jun 2007, Crispin Cowan wrote:

 Gary McGraw wrote:
  Though I don't quite understand computer science theory in the same way 
  that Crispin does, I do think it is worth pointing out that there are two 
  major kinds of security defects in software: bugs at the implementation 
  level, and flaws at the design/spec level.  I think Crispin is driving at 
  that point.
 
 Kind of. I'm saying that specification and implementation are
 relative to each other: at one level, a spec can say "put an iterative
 loop here" and the implementation is a bunch of x86 instructions.

I agree with this notion.  They can overlap at what I call design
limitations: strcpy() being overflowable (and C itself being
overflowable) is a design limitation that enables programmers to make
implementation errors.  I suspect I'm just rephrasing a tautology, but
I've theorized that all implementation errors require at least one design
limitation.  No high-level language that I know of has a built-in
mechanism for implicitly confining file accesses to a limited directory
(barring chroot-style jails), which is a design limitation that enables
a wide variety of directory traversal attacks.

If you have a standard authentication algorithm with a required step that
ensures integrity, then a product that doesn't perform this step has an
implementation bug at the algorithm's level - but if the developers didn't
even bother putting this requirement into the design, then at the product
level, it's a design problem.  Or something like that.

  If we assumed perfection at the implementation level (through better
  languages, say), then we would end up solving roughly 50% of the
  software security problem.
 
 The 50% being rather squishy, but yes this is true. It's only vaguely
 what I was talking about, really, but it is true.

For whatever it's worth, I think I agree with this, with the caveat that I
don't think we collectively have a solid understanding of design issues,
so the 50% guess is quite squishy.  For example, the terminology for
implementation issues is much more mature than terminology for design
issues.

One sort-of side note: in our vulnerability type distributions paper
[1], which we've updated to include all of 2006, I mention how major Open
vs. Closed source vendor advisories have different types of
vulnerabilities in their top 10 (see table 4 analysis in the paper).
While this discrepancy could be due to researcher/tool bias, it's probably
also at least partially due to development practices or language/IDE
design.  Might be interesting for someone to pursue *why* such differences
occur.

- Steve

[1] http://cwe.mitre.org/documents/vuln-trends/index.html


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Steven M. Christey

I agree with Ryan, at the top skill levels anyway.  Binary reverse
engineering seems to have evolved to the point where I refer to binary as
"source-equivalent", and I was told by some well-known applied researcher
that some vulns are easier to find in binary than source.

But the bulk of public disclosures are not by top researchers, so I'd
suspect that in the general field, source inspection is more accessible
than binary.  So with closed source, people are more likely to use black
box tools, which might not be as effective in finding things like format
string issues, which often hide in rarely triggered error conditions but
are easy to grep for in source.  And maybe the people who have source code
aren't going to be as likely to use black box testing, which means that
obscure malformed-input issues might not be detected.  This is probably
the general researcher; the top researcher is more likely to do both.

Since techniques vary so widely across individuals and researcher bias is
not easily measurable, it's hard to get a conclusive answer about whether
there's a fundamental difference in the *latent* vulns in open vs. closed
(modulo OS-specific vulns), but the question is worth exploring.

On Tue, 12 Jun 2007, Blue Boar wrote:

 Crispin Cowan wrote:
  Do you suppose it is because of the different techniques researchers use
  to detect vulnerabilities in source code vs. binary-only code? Or is
  that a bad assumption because the hax0rs have Microsoft's source code
  anyway? :-)

 I'm in the process of hiring an outside firm for security review of the
 product for the day job. They didn't seem particularly interested in the
 source, the binaries are sufficient. It appears to me that the
 distinction between source and object is becoming a bit moot nowadays.


   Ryan



Re: [SC-L] The Specifications of the Thing

2007-06-12 Thread Steven M. Christey

On Tue, 12 Jun 2007, Michael S Hines wrote:

 So - aren't a lot of the Internet security issues errors or omissions in the
 IETF standards - leaving things unspecified which get implemented in
 different ways - some of which can be exploited due to implementation flaws
 (due to specification flaws)?

This happens a lot in interpretation conflicts [1] that occur in
intermediaries - proxies, IDSes, firewalls, etc. - where they have to
interpret traffic/data according to how the end system is expected to
treat that data.  Incomplete specifications, or those that leave details
for an implementation, will often result in end systems that have
different behaviors based on the same input data.  nmap's OS detection
capability is an obvious example; Ptacek/Newsham's classic IDS evasion
paper is another.

Many of the anti-virus or spam bypass vulns being reported are of this
flavor (although lately, researchers have realized that they don't always
have to bother with interpretation conflicts when the products have
obvious overflows).

Non-standard implementations make the problem even worse, because then
they're not even acting like they're expected to, as we often see in
esoteric XSS variants.

- Steve

[1] "Interpretation conflict" is my current term for
http://cwe.mitre.org/data/definitions/436.html


Re: [SC-L] Perspectives on Code Scanning

2007-06-07 Thread Steven M. Christey

On Thu, 7 Jun 2007, Michael Silk wrote:

 and that's the problem. the accountability for insecure coding should
 reside with the developers. it's their fault [mostly].

The customers have most of the power, but the security community has
collectively failed to educate customers on how to ask for more secure
software.  There are pockets of success, but a whole lot more could be
done.

From a developer-focused perspective, we need to deal with (1)  ensuring
that developers KNOW how to produce secure code (or interpret tool
results), but then (2) actually produce the secure code within given
deadlines.  I know that (2) is a common topic on this list, but deadlines
won't change until customers force the issue, which currently requires
weaning them from featuritis, which has such low prospects of success that
it's starting to depress me, so I'll stop and we've talked about this
before anyway.

  It would seem to be that tools that developers plug into their IDE
  should be free since the value proposition should reside elsewhere.

I personally love this sentiment, but that's not how the current market is
working, and I'm not sure how it would shift to that point.  There might
be lessons from the anti-virus community's long history (nowadays mostly
covering the same stuff using a subscription model, but they still compete
on speed more than quality of information to the end user).  I don't know
what the vuln scanning tool industry is up to these days (Nessus, Retina,
etc.) but I do know that management-friendly reporting was the bane of
that technology's existence for years.

- Steve


Re: [SC-L] What's the next tech problem to be solved in software security?

2007-06-07 Thread Steven M. Christey

On Wed, 6 Jun 2007, Wietse Venema wrote:

 more and more people, with less and less experience, will be
 programming computer systems.

 The challenge is to provide environments that allow less experienced
 people to program computer systems without introducing gaping
 holes or other unexpected behavior.

I completely agree with this.  This is a grand challenge for software
security, so maybe it's not the NEXT problem.  There's a lot of tentative
work in this area - safe strings in C, SafeInt,
StackGuard/FormatGuard/etc., non-executable data segments, security
patterns, and so on.  But these are bolt-on methods on top of the same
old languages or technologies, and some of these require developer
awareness.  I know there's been some work in secure languages but I'm
not up-to-date on it.

More modern languages advertise security but aren't necessarily
catch-alls.  I remember one developer telling me how his application used
Ruby on Rails, so he was confident he was secure, but it didn't stop his
app from having an obvious XSS in core functionality.

 An example is the popular PHP language. Writing code is comparatively
 easy, but writing secure code is comparatively hard. I'm working on
 the second part, but I don't expect miracles.

PHP is an excellent example, because it's clearly lowered the bar for
programming and has many features that are outright dangerous, where it's
understandable how the careless/clueless programmer could have introduced
the issue.  Web programming in general, come to think of it.

- Steve


Re: [SC-L] Tools: Evaluation Criteria

2007-05-22 Thread Steven M. Christey

On Tue, 22 May 2007, McGovern, James F (HTSC, IT) wrote:

 We will shortly be starting an evaluation of tools to assist in the
 secure coding practices initiative and have been wildly successful in
 finding lots of consultants who can assist us in evaluating but
 absolutely zero in terms of finding RFI/RFPs of others who have
 travelled this path before us. Would especially love to understand
 stretch goals that we should be looking for beyond simple stuff like
 finding buffer overflows in C, OWASP checklists, etc.

semi-spam: With over 600 nodes in draft 6, the Common Weakness Enumeration
(CWE) at http://cwe.mitre.org is the most comprehensive list of
vulnerability issues out there, and it's not just implementation bugs.
That might help you find other areas you want to test.  In addition, many
code analysis tool vendors are participating in CWE.

 In my travels, it feels as if folks are simply choosing tools in this
 space because they are the market leader, incumbent vendor or simply
 asking an industry analyst but none seem to have any deep criteria. I
 guess at some level, choosing any tool will move the needle, but
 investments really should be longer term.

Preliminary CWE analyses have shown a lot less overlap across the tools
than expected, so even based on vulnerabilities tested, this is an
important consideration.

You might also want to check out the SAMATE project (samate.nist.gov),
which is working towards evaluation and understanding of tools, although
it's a multi-year program.

Finally, Network Computing did a tool comparison:


http://www.networkcomputing.com/article/printFullArticle.jhtml?articleID=198900460

- Steve


Re: [SC-L] Darkreading: Secure Coding Certification

2007-05-14 Thread Steven M. Christey

On Mon, 14 May 2007, McGovern, James F (HTSC, IT) wrote:

 1. ONLY consultants and vendors have jumped on the bandwagon. Other IT
 professionals such as those who work in large enterprises have no
 motivation to pursue.

Only vendors have jumped on the bandwagon?  The software developers are
the ones we WANT jumping on the bandwagon.

But it's not just those two.  The initial announcement of these exams
featured representatives from several large US government organizations
who said they need this.  Other major US organizations need this and
want this, but they aren't saying so publicly.  SANS did a survey of over
300 organizations that included a lot of software consumers.

 3. It needs to be more language agnostic. Folks who code in Smalltalk,
 Ruby or scripting languages should not be treated as second class
 citizens

The current tests are designed to handle specific skills in specific,
prominent languages.  Other tests might come out as a result of demand.

 4. I would not measure experience but desire to pursue knowledge.

This would be great, but I'm not sure how you could actually test it.

- Steve


Re: [SC-L] Darkreading: Secure Coding Certification

2007-05-14 Thread Steven M. Christey

On Sat, 12 May 2007, ljknews wrote:

 but based on biases I see on this list, I tend to believe that those
 who make such a certification scheme would bias it toward:

   Programming done in C and derivative languages (C++, Java, etc.)

   Programming relying on TCP/IP

 neither of which is relevant to my endeavors.

The test is intended to cover the language areas and programming idioms
that are most likely to be taught at the university level and used by
programmers with only a couple years' experience.

- Steve


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-21 Thread Steven M. Christey

On Wed, 21 Mar 2007, mudge wrote:

 Sorry, but I couldn't help but be reminded of an old L0pht topic that
 we brought up in January of 1999. Having just re-read it I found it
 still relatively poignant: Cyberspace Underwriters Laboratories[1].

I was thinking about this, too, I should have remembered it in earlier
comments.  The fact that such a thing has NOT come to fruition seems to be
symptomatic of the industry, although there have been some partnerships
between commercial and non-commercial entities (e.g. Fortify and the Java
Open Review).

- Steve


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-21 Thread Steven M. Christey

I was originally going to say this off-list, but it's not that big a deal.

Arian J. Evans said:

 I think you are on to something here in how to think about this subject.
 Perhaps I should float my little paper out there and we could shape up
 something worth while describing how the industry is evolving today.

I've been wanting to do something along these lines but don't have much
time.  I'll gladly review it or provide suggestions.  I have a draft on
current disclosure practices that includes the diversity of researchers
and the role of vulnerability information providers.

- Steve


Re: [SC-L] Economics of Software Vulnerabilities

2007-03-19 Thread Steven M. Christey

On Mon, 19 Mar 2007, Crispin Cowan wrote:

 Since many users are economically motivated, this may explain why users
 don't care much about security :)

But... but... but...

I understand the sentiment, but there's something missing in it.  Namely,
that the costs related to security are not really quantifiable yet, so
consumers are not working with the best information.  Then there's simple
lack of understanding, such as that exemplified by an individual consumer -
their computer gets really bogged down and slow, and they don't know
what's happening, so they go buy a new computer, when it was just a ton
of spyware from surfing habits that they didn't know were unsafe, or they
were running some zombie that was sucking up all their bandwidth for warez
distribution.

  Eventually I think they'll get fed up and there'll be a consumer uprising.
 
 Why do you think it will be an uprising? Why not a gradual shift of the
 vendors just get better, exactly as fast as the users need them to?

I really really wish for an uprising, but unfortunately I'm not too
optimistic right now.  Off the top of my head, I can't think of any
consumer uprisings in other industries, although the recent decline in
US sales of fuel-inefficient vehicles is sort of close.  Didn't some large
brick-and-mortar companies heavily criticize the software industry a
couple years ago?  I don't know how that played out.

- Steve


Re: [SC-L] Information Protection Policies

2007-03-10 Thread Steven M. Christey

On a slightly tangential note, and apologies if this was mentioned on this
list previously, OWASP has some guidelines on how consumers can write up
contracts with their vendors related to secure software:

http://www.owasp.org/index.php/OWASP_Secure_Software_Contract_Annex

- Steve


Re: [SC-L] What defines an InfoSec Professional?

2007-03-08 Thread Steven M. Christey

On Thu, 8 Mar 2007, Greg Beeley wrote:

 Perhaps one of the issues here is that if you are in operations work
 (network security, etc.), there are more aspects of the CISSP that are
 relevant to your daily work.  In software development, there is usually
 just the one - app development sec - that the developer thinks about,
 unless the code has inherent security functionality, in which case
 access control, architecture/models, and cryptography can be important
 too.

Secure development certification will hopefully come to the marketplace in
droves in the next year or two.  One organization is
not-so-privately-but-technically-not-yet-publicly preparing to roll
something out in the coming months, and hopefully that will inspire
others.  Insert obligatory cert disclaimer here, but geez it's badly
needed to raise the bar even a hair.

 developer meet, to be a security professional?  Should there be
 something like the Common Criteria EAL's, but somewhat less formal,
 to encourage broader use in labeling projects and code, esp. in the
 open-source world?

Dave Litchfield and I have *very* casually investigated forming a CC-like
concept of Vulnerability Assessment Assurance Levels (VAAL) which is
intended to reflect the depth of a vuln researcher's analysis as some
crude but semi-repeatable measure of assurance.  I've also done some
thinking about vulnerability complexity, and I assume I've mentioned my
vulnerability theory work on this list since I never shut up about it.
Such concepts could be turned around to reflect the depth of understanding
that a developer has - e.g. they know enough to try to strip out SCRIPT
tags but they don't know about javascript: in IMG tags.  I have a couple
pages of working notes on VAAL for offline dissemination for interested
parties who promise to give me feedback.

- Steve


Re: [SC-L] Disclosure: vulnerability pimps? or super heroes?

2007-03-07 Thread Steven M. Christey

Based on my general impressions in day-to-day operations for CVE (around
150 new vulns a week on average), maybe 40-60% of disclosures happen
without any apparent attempt at vendor coordination, another 10-20% with a
communication breakdown (including "they didn't answer in 2 days"), and
the rest coordinated.  A bit of a guess there, though.

The only remotely relevant survey that I can think of was by me and
Barbara Pease, 6 years ago in 2001, and we were reduced to qualitative
analysis because data collection turned out to be too expensive, and this
was focused on vendor acknowledgement (which holds steady at 50% no matter
what the year).  But disclosure timelines are thankfully more prevalent
these days, so an updated study would be more illuminating.  I'm looking
forward to Richard Forno's study of vuln researchers whenever it comes
out.

For obligatory SC-L content: this is one reason why I think vendor
development/maintenance processes need to be prepared for non-coordinated
disclosures.

- Steve


Re: [SC-L] Disclosure: vulnerability pimps? or super heroes?

2007-03-05 Thread Steven M. Christey

On Tue, 27 Feb 2007, J. M. Seitz wrote:

 Always a great debate, I somewhat agree with Marcus, there are plenty of
 pimps out there looking for fame, and there are definitely a lot of them
 (us) that are working behind the scenes, taking the time to help the vendors
 and to stay somewhat out of the limelight.

Do the people who write the books to avoid the vulns, sell the tools, and
give talks at conferences stay out of the limelight as well?  What about
all those podcasts?  They should be discounted too, since they're clearly
pimping something.  They must have ulterior motives.  Don't get me started
on those rabble-rousers who complain about voting machine security.

Not that I don't have issues with how disclosure happens sometimes, but
the anti-researcher sentiment that castigates them based on looking for
fame by people who are themselves famous strikes me as a bit
hypocritical.  Why do we know that Marcus designed the White House's first
firewall?  'cause he told us, that's why.

We're very lucky that assumed fame-hunters like Cesar Cerrudo and David
Maynor have decided that they won't bother telling the vendor about vulns
they find because of all the trouble it gets them into.  It's quite
unfortunate that Litchfield has almost single-handedly dared to question
Oracle's claim that it's unbreakable.  Perhaps we would prefer that these
pimpers stop giving us disclosure timelines that show that they notified
vendors about issues months or YEARS before the vendors actually got
around to fixing them.  We can go back to security through obscurity, the
old fashioned way, by lawsuits and threats.  Like what happened at Black
Hat last week, but with less press.

Basically, I have an issue with the criticism of this aspect of researcher
pimpage when it's usually the pot calling the kettle black, when most of
us are getting paid one way or another for this work, and there's a
pervasive inability to recognize that many such researchers feel forced to
disclose when the vendor still does nothing.  And many researchers aren't
in it for the fame, which is the assumption that the pimpage argument is
based on.

Sorry, must be a case of the Mondays combined with this building up over a
year or two.  The vuln researchers are the only people in this business who
get no respect.

- Steve


Re: [SC-L] Off-by-one errors: a brief explanation

2004-05-06 Thread Steven M. Christey

[EMAIL PROTECTED] said:

 that wasnt the question- well 'not how can overwritting 5 bytes help
 you', but what error do you code thats a miscount by 5 bytes?

The off-by-one errors I am familiar with have manipulated character
arrays, so each element is one byte long.  When the index is off by
one, you can write one extra byte.

If you have an array of data structures that are 5 bytes each, then an
off-by-one error (i.e., off by one *index*) gives you 5 bytes to
work with.  I don't know if any vulnerabilities of this flavor have
been publicized, but I vaguely recall some classic buffer overflow
vulnerabilities have involved multi-byte structures instead of
single-byte characters.

However, upon some investigation, it looks like there might be some
inconsistent terminology going around.

The only "off-by-five" error that I could find was reported for sudo
by Global InterSec Research in April 2002:

   BUGTRAQ:20020402 [Global InterSec 2002041701] Sudo Password Prompt
   URL:http://marc.theaimsgroup.com/?l=bugtraq&m=101974610509912&w=2

   original advisory at:

 http://www.globalintersec.com/adv/sudo-2002041701.txt

This problem was *not* due to an index problem, which seems to be the
core of what I call an off-by-one issue.

In this off-by-five case, the researchers conclude: "it is possible
to trick sudo into allocating less memory than it should for the
prompt."  Specifically, sudo does not properly handle certain
expansion characters in a string, which causes the expanded string to
be longer than expected.

To me, that seems like a different kind of issue than an off-by-one
index error, at least as it appears in the source code.

So, the off-by-five problem is, in my opinion, a misnomer - at least
from the perspective of the underlying programming error.  From the
exploit perspective, it's fine.

And this is one of the reasons why, at CanSecWest this year, I
mentioned that we need to be more precise about terminology :-)

- Steve




[SC-L] Off-by-one errors: a brief explanation

2004-05-05 Thread Steven M. Christey

Mads Rasmussen [EMAIL PROTECTED] said:

 I for one have difficulties understanding the off-by-one
 vulnerability. Maybe a kind soul would step in?

I'll try to tackle this.  Corrections or additions are most welcome :)

In general, off-by-one bugs involve small errors in which an array of
size N is accessed using an index of N - but since indexing is 0-based
in C, the maximum valid index for the array is N-1.  So, index N is
actually one element past the end of the array.  I haven't dug deeply
into the details, but there are probably a couple variants.

When manipulating strings using functions like strcpy, this means that
the terminating null byte is written outside of the buffer, into some
adjacent memory location - which can have security implications if
zeroing out that location changes meaningful data.  Or, that memory
location may be overwritten after the null was inserted (say, by a
string copy into another variable), so the null character is removed.
Then, a function that processes the string will keep reading memory
until it hits a 0 byte.

Functions like strncpy can also be involved in off-by-ones.  If the
source string's length is greater than or equal to the size limit N,
then strncpy doesn't add a terminating null byte.

Any kind of C array can be susceptible to off-by-ones, not just
strings.  And the use of terminators isn't necessarily required.  For
example, if a programmer has an array of data structures, its length
might be stored in a separate variable, rather than relying on a
terminator value to signify the last element of the array.

The bug isn't always exploitable for code execution.  For example,
sensitive data could be leaked from nearby memory locations due to a
missing null terminator.

Some documents that touch on off-by-ones include:

  Halvar Flake's presentation at Black Hat Europe 2001 on Third
  Generation Exploits on NT/Win2k Platforms, which includes buffer
  overflows, heap/free() and off-by-one errors:


http://www.blackhat.com/presentations/bh-europe-01/halvar-flake/bh-europe-01-halvarflake.ppt

This includes a nice graphic representation of the problem at the
stack level, touching on how portions of return addresses can be
overwritten.

  The following Bugtraq post by Vade 79 gives an alternate description
  of off-by-ones, along with an example that causes potentially
  sensitive memory to be read and copied into a string because of the
  missing terminator.

BUGTRAQ:20030727 [PAPER]: Address relay fingerprinting.
URL:http://marc.theaimsgroup.com/?l=bugtraq&m=105941103709264&w=2

  The following Bugtraq post by Jedi/Sector One gives something of a
  good demonstration if you read between the lines in the code:

BUGTRAQ:20020624 Apache mod_ssl off-by-one vulnerability
URL:http://marc.theaimsgroup.com/?l=bugtraq&m=102513970919836&w=2

In this example, a buffer is allocated 1024 bytes, and there is a
conditional in a loop which tests whether i < 1024.  However, after
that loop exits, the array is modified at index i, which can be 1024.

  Olaf Kirch's Bugtraq post The poisoned NUL byte seems to be an
  early report of the security implications of an off-by-one error:

BUGTRAQ:19981014 The poisoned NUL byte
URL:http://www.securityfocus.com/archive/1/10884

  Here are some more source code examples, from Bugtraq posts by
  Janusz Niewiadomski:

BUGTRAQ:20030714 Linux nfs-utils xlog() off-by-one bug
URL:http://marc.theaimsgroup.com/?l=bugtraq&m=105820223707191&w=2

BUGTRAQ:20030731 wu-ftpd fb_realpath() off-by-one bug
URL:http://marc.theaimsgroup.com/?l=bugtraq&m=105967516807664&w=2


- Steve