Re: [SC-L] InformIT: comparing static analysis tools

2011-02-03 Thread John Steven
All,

I followed this article up with a blog entry, targeted more at adopting 
organizations. I hope you find it useful:

http://www.cigital.com/justiceleague/2011/02/02/if-its-so-hard-why-bother/


John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.


 hi sc-l,
 
 John Steven and I recently collaborated on an article for InformIT.  The 
 article is called "Software [In]security: Comparing Apples, Oranges, and 
 Aardvarks (or, All Static Analysis Tools Are Not Created Equal)" and is 
 available here:
 
 http://www.informit.com/articles/article.aspx?p=1680863
 
 
 Now that static analysis tools like Fortify and Ounce are hitting the 
 mainstream, there are many potential customers who want to compare them and 
 pick the best one.  We explain why that's more difficult than it sounds at 
 first and what to watch out for as you begin to compare tools.  We did this 
 in order to get out in front of test suites that purport to work for tool 
 comparison.  If you wonder why such suites may not work as advertised, read 
 the article.
 
 Your feedback is welcome.



___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
___


Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-12 Thread John Steven
-provided cut points and ESAPI code. For instance, one can 
simulate [the dreaded] 'multiple inheritance' of both Struts and ESAPI base 
classes with the template method pattern: a sub-class of (say) the 
Struts-provided class implements a template method that calls security 
controls (such as validation or the largely vestigial ESAPI authentication 
checks) before handing off to end-application developer code that handles 
the rest of the controller functionality/business logic.
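
Concretely, a minimal sketch of that shape (hypothetical package and class 
names; assumes the Struts 1 Action API and the ESAPI 2.x entry points):

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.struts.action.*;
    import org.owasp.esapi.ESAPI;

    // Security-aware base class: extends the Struts-provided Action and
    // runs controls before delegating to the template method.
    public abstract class SecureAction extends Action {

        public final ActionForward execute(ActionMapping mapping, ActionForm form,
                                           HttpServletRequest request,
                                           HttpServletResponse response)
                throws Exception {
            // Controls run unconditionally, ahead of any business logic.
            ESAPI.httpUtilities().assertSecureRequest(request);
            ESAPI.validator().getValidInput("user param",
                request.getParameter("user"), "SafeString", 64, true);
            return executeSecurely(mapping, form, request, response);
        }

        // End-application developers implement controller/business logic here.
        protected abstract ActionForward executeSecurely(ActionMapping mapping,
                ActionForm form, HttpServletRequest request,
                HttpServletResponse response) throws Exception;
    }

Because execute() is final, a developer sub-classing SecureAction cannot 
accidentally route around the controls; the compiler enforces the cut point.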

Personally, I think the strategy of tacking ESAPI calls onto a developer's 
application code manually, on a case-by-case basis, without the techniques 
described above is bound for failure. Developers simply won't be able to reach 
the total consistency required for robust defense in a large existing 
application. If you're going to walk this road, though, then for the love of 
God please deploy SAST to make sure that something is sweeping through and 
looking for that ever-elusive consistency of application I describe.
 
 And this is not just a wild idea; I'm lucky to witness some of the
 largest institutions on the planet successfully implement ESAPI in the
 real world.
 
 And sure, you can build a new secure app without an ESAPI. But libs
 like OWASP ESAPI will get you there faster and cheaper.

I'd be very much interested in data regarding 'faster and cheaper'. With the 
exception of the input validation, canonicalization, and related functionality 
(*5), it seems like a lot of analysis and integration jobs remain when adopting 
ESAPI. I'd also like to know about bug rates relative to non-ESAPI code. I've 
been on the ESAPI mailing list for a while and can't discern from conversation 
much information regarding successful operationalization, though I hear 
rumblings of people working on this very problem.

Cheers all,

John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.

(*1) This same argument scales down into the platform further: Java EE core, 
underlying OS / hypervisor... etc. 

(*2) Message dated: Jan 2010 00:04:49, ESAPI-Users Mailing list

(*3) Quoting Manico:
 I've used regular expression multi-file-search-and-replace tricks across many 
 million-LOC applications

In the case of (*3), I prefer AOP or SA to RegExp because of the type safety 
those two approaches provide and the option to log transforms. Though, for 
some cut points, RegExp may be sufficient to accurately 'lock in' a cut point.
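
For the record, a minimal annotation-style AspectJ sketch of what I mean 
(hypothetical package and pointcut; assumes the AspectJ 5 annotation API):

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    @Aspect
    public class ValidationAdvice {

        // The pointcut matches on signatures resolved against the type
        // system, not on raw text, so it can't mangle a string that merely
        // looks like code the way a RegExp rewrite can.
        @Before("execution(* com.example.web..*Action.execute(..))")
        public void validateFirst(JoinPoint jp) {
            // Log each woven call site to create an audit trail of transforms.
            System.out.println("validation advice applied at: " + jp.getSignature());
            // ...invoke validation controls here, before the action body runs.
        }
    }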

(*4) Unlike the purely Struts-based approach, though, developers have to 
remember the annotation, and the change does necessitate recompilation and 
redeployment (be it Python, Java, or .NET).

(*5) These areas appear to have received the lion's share of attention to date 
(rightfully so, and to great avail).

 On Jan 7, 2010, at 1:02 PM, John Steven jste...@cigital.com wrote:
 
 Jim,
 
 Yours was the predicted response. The ref-impl. to API side-step
 does not fix the flaw in the argument though.
 
 No, you do not need AN ESAPI to build secure apps.
 
 Please re-read my email carefully.
 
 Alternatives:
 1) Some organizations adopt OWASP ESAPI's ref-impl.
 2) Others build their own; they do agree and see the value, yes
 
 #1 and #2 agree with your position.
 
 3) Some secure their toolkits (again, a la secure struts)
 
 Indicating that such a secure Struts is an organization's ESAPI
 perverts the ESAPI concept far too greatly to pass muster. Indeed,
 were it to pass, it would violate properties 3 and 4 (and very likely 2)
 within my previous email's advantage list.
 
 Mr. Boberski, you too need to re-read my email. I advise you
 strongly not to keep saying that ESAPI is like PKI-enabling an app.
 I don't think many people got a good feeling about how much they
 spent on their PKI implementation, or how effective it was ;-). Please
 consider how you'd ESAPI-enable the millions of lines of underlying
 framework code beneath the app.
 
 4) Policy + Standards, buttressed with a robust assurance program
 
 Some organizations have sufficiently different threat models and
 deployment scenarios within their 'four walls' that they opt for
 specifying an overarching policy and checking each sub-
 organization's compliance--commensurate with their risk tolerance
 and each app deployment's threat model. Each sub-organization may-or-
 may-not choose to leverage items one and two from this list. I
 doubt, however, you'd argue that more formal methods of verification
 don't suffice to perform 'as well' as ESAPI in securing an app (BTW,
 I have seen commercial implementations opt for such verification as
 an alternative to a security toolkit approach). Indeed, a single
 security API would likely prove a disservice if crammed down the
 throats of sub-organizations that differ too greatly.
 
 At best

Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread John Steven
 a fair amount of exoneration from 
normal processes (some of which is OK, but a lot can be dangerous). Second, 
please make sure it's actually secure--it will be a fulcrum of your security 
controls' effectiveness. Make sure that assessment program proves your 
developers used it correctly, consistently, and thoroughly throughout their 
apps. What do I tell you about ESAPI and your MVC frameworks (Point #3 from 
above)? -sigh- That's a longer discussion. And, by all means, don't think you 
can let your guard down on your pen-testing. Is it a silver bullet? No. 

Is ESAPI the only approach? No. I submit that it's -A- way. I hope this email 
outlines that effectively. And viewed from a knowledgeable but separate 
perspective: the ESAPI approach has pluses and minuses just like all the 
others. 
 

John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.
 
(*1) http://bsi-mm.com/ssf/intelligence/sfd/?s=sfd1.1#sfd1.1
(*2) During the AppSecDC summit, Jeff indicated the ESAPI project would later 
pilot SAMM, but the global projects committee indicated that getting OWASP 
projects to follow some secure development touchpoints is too 
onerous/impossible. Dinis, I'll note, is a huge proponent of adherence.


On Jan 6, 2010, at 4:36 PM, James Manico wrote:

 Hello Matt,
 
 Java EE still has NO support for escaping and for lots of other important 
 security areas. You need something like OWASP ESAPI to make a secure app even 
 remotely possible. I was once a Sun guy, and I'm very fond of Java and Sun. 
 But JavaEE 6 does very little to raise the bar when it comes to Application 
 Security.
 
 - Jim
 
 On Tue, Jan 5, 2010 at 3:30 PM, Matt Parsons mparsons1...@gmail.com wrote:
 From what I read, it appears that Java EE 6 could change a few rules.
 It looks to me like Java is checking for authorization and
 authentication with this new framework.   If that is the case, I think that
 static code analyzers could change their rule sets to check what is normally
 a manual process in code review: authentication and authorization.
 Am I correct in my assumption?
 
 Thanks,
 Matt
 
 
 Matt Parsons, MSM, CISSP
 315-559-3588 Blackberry
 817-294-3789 Home office
 mailto:mparsons1...@gmail.com
 http://www.parsonsisconsulting.com
 http://www.o2-ounceopen.com/o2-power-users/
 http://www.linkedin.com/in/parsonsconsulting
 
 
 
 
 
 
 -Original Message-
 From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org]
 On Behalf Of Kenneth Van Wyk
 Sent: Tuesday, January 05, 2010 8:59 AM
 To: Secure Coding
 Subject: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security
 made simple ! | Core Security Patterns Weblog
 
 Happy new year SC-Lers.
 
 FYI, interesting blog post on some of the new security features in Java EE
 6, by Ramesh Nagappan.  Worth reading for all you Java folk, IMHO.
 
 http://www.coresecuritypatterns.com/blogs/?p=1622
 
 
 Cheers,
 
 Ken
 
 -
 Kenneth R. van Wyk
 SC-L Moderator
 
 
 
 
 
 -- 
 -- 
 Jim Manico, Application Security Architect
 jim.man...@aspectsecurity.com | j...@manico.net
 (301) 604-4882 (work)
 (808) 652-3805 (cell)
 
 Aspect Security™
 Securing your applications at the source
 http://www.aspectsecurity.com







Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread John Steven
Jim,

Yours was the predicted response. The ref-impl. to API side-step does not fix 
the flaw in the argument though.

No, you do not need AN ESAPI to build secure apps.

Please re-read my email carefully. 

Alternatives:
1) Some organizations adopt OWASP ESAPI's ref-impl.
2) Others build their own; they do agree and see the value, yes

#1 and #2 agree with your position.

3) Some secure their toolkits (again, a la secure struts)

Indicating that such a secure Struts is an organization's ESAPI perverts the 
ESAPI concept far too greatly to pass muster. Indeed, were it to pass, it would 
violate properties 3 and 4 (and very likely 2) within my previous email's 
advantage list.

Mr. Boberski, you too need to re-read my email. I advise you strongly not to 
keep saying that ESAPI is like PKI-enabling an app. I don't think many people 
got a good feeling about how much they spent on their PKI implementation, or 
how effective it was ;-). Please consider how you'd ESAPI-enable the millions 
of lines of underlying framework code beneath the app.

4) Policy + Standards, buttressed with a robust assurance program

Some organizations have sufficiently different threat models and deployment 
scenarios within their 'four walls' that they opt for specifying an overarching 
policy and checking each sub-organization's compliance--commensurate with their 
risk tolerance and each app deployment's threat model. Each sub-organization 
may-or-may-not choose to leverage items one and two from this list. I doubt, 
however, you'd argue that more formal methods of verification don't suffice to 
perform 'as well' as ESAPI in securing an app (BTW, I have seen commercial 
implementations opt for such verification as an alternative to a security 
toolkit approach). Indeed, a single security API would likely prove a 
disservice if crammed down the throats of sub-organizations that differ too 
greatly.

At best, the implicit 'ESAPI or the highway' campaign slogan applies to only 
50% of the alternatives I've listed. And since the ESAPI project doesn't have 
documented and publicly available good, specific, actionable requirements, 
mis-use cases, or a threat model from which it's working, the OWASP ESAPI 
project doesn't do as much as it could for the #2 option above.

Jim, Mike, I see your posts all throughout the blogosphere and mailing 
lists. Two-line posts demanding people adopt ESAPI or forgo all hope can 
put people off. It conjures close-minded religion to me. Rather:

* Consider all four of the options above; one might be better than OWASP ESAPI 
within the context of the post
* Consider my paragraph following Point #4. Create:

    * An ESAPI mis-use case guide; back out the security policy it 
      manifests, or the requirements it implements (and don't point me 
      to the unit tests--I've read them)
    * An ESAPI threat model (For which apps will developers have their 
      expectations met adopting ESAPI? Which won't?)
    * A document describing experiment results, before and after ESAPI: 
      how many findings does a pen test produce? A code review?
    * An adoption guide. Apps are only created in a green field once. 
      Then they live in maintenance forever. How do you apply ESAPI to 
      a real-world app already in production without risk/regression?

* Generate an argument as to why ESAPI beats these alternatives. Is it cost? 
Speed-to-market? What?
* Finally, realize that it's OK that there's more than one way to do things. 
Revel in it. It's what makes software an exciting field. 

In the meantime, rest assured that those of us out there who have looked get 
that ESAPI can be a good thing.


John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.

On Jan 7, 2010, at 10:56 AM, Jim Manico wrote:

 John,
 
 You do not need OWASP ESAPI to secure an app. But you need AN ESAPI  
 for your organization in order to build secure apps, in my opinion.  
 OWASP ESAPI may help you get started down that path.
 
 An ESAPI is no silver bullet; there is no such thing as that in  
 AppSec. But it will help you build secure apps.
 
 Jim Manico
 
 On Jan 6, 2010, at 6:20 PM, John Steven jste...@cigital.com wrote:
 
 All,
 
 With due respect to those who work on ESAPI, Jim included, ESAPI is  
 not the only way to make a secure app "even remotely possible." And  
 I believe that underneath their own pride in what they've done--some  
 of which is very warranted--they understand that. It's hard not to  
 become impassioned in posting.
 
 I've seen plenty of good secure implementations within  
 organizations' own security toolkits. I'm not the only one that's  
 noticed: the BSIMM SSF calls out three relevant activities to this  
 end:
 
 SDF

[SC-L] Static Vs. Binary

2009-07-30 Thread John Steven
Something occurred to me last night as I pondered where this discussion's
tendrils are taking us.

A point I only made implicitly is this: The question, for years, has been
"conduct your SA on source code or binary?" You can see that there are
interesting subtleties in even those languages that target intermediate
representational formats (like Java and the .NET family of languages that
compile to MSIL). The garbage-collection-optimization problems that plague
those asking "How do I assure password String cleanup in Java" are of the
same ilk as the gcc optimizations that trouble the C/C++ realm.
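
(For readers who haven't hit that one: the textbook mitigation is sketched 
below. The very point above is that even this is imperfect: a copying 
collector or the JIT may have duplicated the array before the wipe, and only 
the binary/runtime view reveals it.)

    import java.io.Console;
    import java.util.Arrays;

    public class PasswordWipe {
        public static void main(String[] args) {
            Console console = System.console();        // null if no terminal
            char[] password = console.readPassword("Password: ");
            try {
                authenticate(password);                 // hypothetical check
            } finally {
                Arrays.fill(password, '\0');            // best-effort cleanup;
            }                                           // a String would linger until GC
        }

        private static void authenticate(char[] password) { /* ... */ }
    }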

Yes, this question is still pertinent. It _is_ interesting to those looking
for thorough/sound analysis to consider fidelity and resolution at this
level. People are beginning to echo what I've been saying for years: this
problem extends beyond the initial compile into the runtime optimizations
and runtime compilers. My previous post reiterates that there's a lot more
to it than most people consider.

I think I allowed that clarification to muddle my more strategic point:

   -
Whereas THE question used to be source code vs. binary representation,
the question is NOW: What set of IOC-container/XML combos,
aspect weaver results, method/class-level annotations, and other such
tomfoolery governs the execution of my application beyond what the
compiler initially output?
   -
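
One tiny illustration of what I mean (hypothetical class; the annotation is 
the standard javax.annotation.security one): the compiler dutifully emits the 
annotation into the class file, but whether anything enforces it depends 
entirely on the container, interceptor stack, or XML wiring surrounding the 
deployed binary:

    import javax.annotation.security.RolesAllowed;

    public class AccountService {

        // Enforced only if the runtime wires in a security interceptor;
        // neither the source nor the compiled class tells you that.
        @RolesAllowed("admin")
        public void closeAccount(long accountId) {
            // business logic; no authorization code visible here
        }
    }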

As Fortify, Veracode, and others punch out this 'static analysis on binaries
via SaaS' battle, they and the organizations they serve would do well to
keep this question in mind... or risk the same failures that the current
crop of parser-based static-analysis tools face against dynamic approaches.

-jOHN

On 7/29/09 8:44 AM, John Steven jste...@cigital.com wrote:

 All,
 
 The question of "Is my answer going to be high-enough resolution to support
 manual review?" or "...to support a developer fixing the problem?" comes down
 to "it depends".  And, as we all know, I simply can't resist an "it depends"
 kind of subtlety.
 
 Yes, Jim, if you're doing a pure JavaSE application, and you don't care about
 non-standard compilers (jikes, gcj, etc.), then the source and the binary are
 largely equivalent (at least in terms of resolution; Larry mentioned gcj).
 Ease of parsing, however, is a different story (for instance, actual
 dependencies are way easier to pull out of a binary than the source code,
 whereas stack-local variable names are easiest in source).
 
 Where you care about "a whole web application" rather than a pure-Java module,
 you have to concern yourself with JSP and all the other MVC technologies.
 Setting aside the topic of XML-based configuration files, you'll want to know
 what (container) your JSPs were compiled to target. In this case, source code
 is different than binary. Similar factors sneak themselves in across the Java
 platform.
 
 Then you've got the world of Aspect-Oriented Programming. Spring and a broader
 class of packages that use AspectJ to weave code into your application will
 dramatically change the face of your binary. To get the same resolution out of
 your source code, you must in essence 'apply' those point cuts yourself...
 Getting binary-quality resolution from source code therefore means predicting
 what transforms will occur at what point-cut locations. I highly doubt any
 source-based approach will get this thoroughly correct.
 
 Finally, from the perspective of dynamic analysis, one must consider the
 post-compiler transforms that occur. Java involves both JIT and HotSpot (using
 two HotSpot compilers, client and server, each of which conducts different
 transforms), which neither binary- nor source-code-based static analysis is
 likely to correctly predict or account for. The binary image that runs is
 simply not that which is fed to ClassLoader.defineClass() as a bytestream.
 
 ...and (actually) finally, one of my favorite code-review techniques is to
 ask for both a .war/.ear/.jar file AND the source code. This almost invariably
 gets a double-take, but it's worth the trouble. How many times do you think the
 web.xml files match between the two? What exposure might you report if they were
 identical? ... What might you test for if they're dramatically different?
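 
 A minimal sketch of the trick (hypothetical paths; uses java.util.zip plus
 Java 11+ convenience methods for brevity):
 
     import java.nio.charset.StandardCharsets;
     import java.nio.file.Files;
     import java.nio.file.Paths;
     import java.util.zip.ZipFile;
 
     public class WebXmlDiff {
         public static void main(String[] args) throws Exception {
             try (ZipFile war = new ZipFile(args[0])) {   // e.g. app.war
                 String deployed = new String(
                     war.getInputStream(war.getEntry("WEB-INF/web.xml")).readAllBytes(),
                     StandardCharsets.UTF_8);
                 String inSource = Files.readString(Paths.get(args[1]));
                 System.out.println(deployed.equals(inSource)
                     ? "web.xml matches source"
                     : "web.xml differs: compare servlet mappings and security constraints");
             }
         }
     }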
 
 Ah... Good times,
  
 John Steven 
 Senior Director; Advanced Technology Consulting
 Direct: (703) 404-5726 Cell: (703) 727-4034
 Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908
 
 Blog: http://www.cigital.com/justiceleague
 Papers: http://www.cigital.com/papers/jsteven
 
 http://www.cigital.com
 Software Confidence. Achieved.
 
 
 On 7/28/09 4:36 PM, ljknews ljkn...@mac.com wrote:
 
 At 8:39 AM -1000 7/28/09, Jim Manico wrote:
 
 A quick note, in the Java world (obfuscation aside), the source and
 binary are really the same thing. The fact that Fortify analyzes
 source and Veracode analyzes class files is a fairly minor detail.
 
 It seems to me that would only

Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist (informIT)

2009-03-19 Thread John Steven
Steve,

You saw my talk at the OWASP assurance day. There was a brief diversion about 
the number of business logic problems and design flaws (coarsely lumped 
together in my chart). That 'weight' should indicate that, at least in the 
subset of clients I deal with, flaws aren't getting short shrift.

http://www.owasp.org/images/9/9e/Maturing_Assessment_through_SA.ppt (for those 
who didn't see it)

You may also want to look at my OWASP NoVA chapter presentation on why we 
believe Top N lists are bad... It's not so much a rant as it is a set of 
limitations in ONLY taking a Top N approach, and a set of constructive steps 
forward to improve one's practices:

http://www.owasp.org/images/d/df/Moving_Beyond_Top_N_Lists.ppt.zip

I cover how one should cause their own organization-specific Top N list to 
emerge and how to manage it once it does.


John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.




On 3/18/09 6:14 PM, Steven M. Christey co...@linus.mitre.org wrote:



On Wed, 18 Mar 2009, Gary McGraw wrote:

 Many of the top N lists we encountered were developed through the
 consistent use of static analysis tools.

Interesting.  Does this mean that their top N lists are less likely to
include design flaws?  (though they would be covered under various other
BSIMM activities).

 After looking at millions of lines of code (sometimes constantly), a
 ***real*** top N list of bugs emerges for an organization.  Eradicating
 number one is an obvious priority.  Training can help.  New number
 one...lather, rinse, repeat.

I believe this is reflected in public CVE data.  Take a look at the bugs
that are being reported for, say, Microsoft or major Linux vendors or most
any product with a long history, and their current number 1's are not the
same as the number 1's of the past.



Re: [SC-L] Language agnostic secure coding guidelines/standards?

2008-11-13 Thread John Steven
All,

James McGovern hits the core issue with his post, though I'm not sure how many 
organizations are self-aware enough to realize it. In practice, his 
philosophical quandary plays out through a few key questions. Do I:

1) Write technology-specific best practices or security policy?
2) Couch my standards as "do not" or "do"?
3) Cull best practices from what people do, or set a bar and drive people 
towards compliance?
4) Spend money on training, or a tool roll-out?

See:
http://risiko.cigital.com/justiceleague/2007/05/25/a-mini-architecture-for-security-guidance/
http://risiko.cigital.com/justiceleague/2007/05/21/how-to-write-good-security-guidance/
http://risiko.cigital.com/justiceleague/2007/05/18/security-guidance-and-its-%e2%80%9cspecificity-knob%e2%80%9d/

Though old, these posts still seem to help.

More recently, this argument has most frequently taken the form of "language-
specific guidance or agnostic security guidance?" This has begun to play out
in Andrew's post quoted below. There's tremendous value in agnostic guidance
(especially because it applies well to languages for which specific guidance
or tool support doesn't yet exist, and because it withstands time's test
slightly better). But what OWASP has documented is a false victory for the
proponents of agnostic guidance, who cite its language independence. It, like
any decent guidance, IS technology-specific, just not specific to any
particular language. It's closely coupled to both the current web-technology
stack and a penetration-testing approach (though, frankly, that is fine). Move
outside of either and you're going to find the guidance wanting. Saying the
OWASP guidance is better than language-specific guidance is like getting
caught in the rabbit hole of Java's "single language compiled to a virtual
machine that runs anywhere" vs. .NET's "many languages compiled to a single
format that runs one place."

High-minded thought about whether one should proceed from the top down (from a 
strong but impractical-to-apply governance initiative) or from the bottom up 
(from a base of core scanning capabilities afforded by a security tool) has won 
me little progress. It's frustrating, and I give up. We needed a breakthrough, 
and we've gotten it:

As a result, we've built a tool chain that allows us/our clients to rapidly 
implement automated checks whether they have a static analysis tool, rely on 
penetration testing, or desire to implement their security testing as part of a 
broader QA effort. The 'rub' is that we've stayed technology-specific (to the 
Java EE platform)--so all the appropriate limitations apply... but recently we 
were able to deploy the static analysis piece of this puzzle (which we call our 
Assessment Factory) and automate 55% of a corporation's (rather extensive) 
security standards for that stack in 12 man-hours. That's ridiculous (in a good 
way).

So, in my mind, the key is to get specific and do it quickly. Deciding whether 
or not to get language- or technology-stack-specific is a red-herring argument. 
The question should be: are you going to implement your automation with dynamic 
testing tools, static analysis tools, or, say, a requirements management tool 
such as Archer?

If you're going the dynamic route, focus on technology-specific guidance. 
Download the OWASP security testing guide. Conduct a gap analysis on the guide: 
what can you automate with your existing test harness? If you don't have a 
harness, download Selenium. Once the gap analysis is done, get to work 
automating iteratively.
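
As one small, concrete example of the kind of check that's automatable here 
(a sketch only: hypothetical URLs, and using Selenium's modern WebDriver API), 
a forced-browsing test that asserts an admin page bounces anonymous users to 
the login screen:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class ForcedBrowsingCheck {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                // No session established; request a protected page directly.
                driver.get("https://app.example.com/admin/users");
                boolean bounced = driver.getCurrentUrl().contains("login");
                System.out.println(bounced
                    ? "PASS: anonymous request redirected to login"
                    : "FAIL: admin page reachable without authentication");
            } finally {
                driver.quit();
            }
        }
    }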

If you're going the static route: focus on language-specific guidance. Begin 
customizing your tool to find vulnerable constructs in your architectural 
idiom, and to detect non-compliance to your corporate standards/policy.

It's really not as bad as it can seem. You just have to remember you won't 
achieve 100% coverage in the first month. Though, any seasoned QA professional 
will tell you: expecting to is ludicrous.


John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of Andrew van der Stock

The OWASP materials are fairly language neutral. The closest document
to your current requirements is the Developer Guide.

I am also developing a coding standard for OWASP with a likely
deliverable date next year. I am looking for volunteers to help with
it, so if you want a document that exactly meets your needs ... Please
join us!

On Nov 12, 2008, at 19:21, Pete Werner [EMAIL PROTECTED] wrote:

 Hi all

 I've been tasked with developing a secure coding standard for my
 employer. This will be a policy tool used to get developers to fix
 issues in their code

Re: [SC-L] Really dumb questions?

2007-08-30 Thread John Steven
James,

Not dumb questions: an unfortunate situation. I do tool bakeoffs for clients a 
fair amount. I'm responsible for the rules Cigital initially sold to Fortify. I 
also attempt to work closely with companies like Coverity and understand deeply 
the underpinnings of that tool's engine. I've a fair amount of experience with 
Klocwork, unfortunately less with Ounce.

I understand the situation like this: technical guys at each of these companies 
are all great guys, smart, and understand their tool's capabilities and focus. 
They accurately describe what their tool does and don't misrepresent it.

On the other hand, I've experienced competition bashing in the sales process as 
I've helped companies with tool selections and bake offs. I see NO value in 
this. As I said in a previous post to this list: the tools differ both 
macroscopically in terms of approach and microscopically in terms of rule 
implementation. Please see my previous post about bake-offs and such if you'd 
like more information on how to disambiguate tool capabilities objectively.

No blanket statement about quality or security fits any vendor's tool; ANY 
vendor. Ignore this level of commentary by the vendors. (*1)

No boolean answer exists to your question; let me give you some of my 
experiences:


 *   Fortify's SCA possesses far-and-away the largest rule set, covering both 
topics people consider purely 'security' and those that may or may not create 
opportunity for exploit (often when combined with other factors), which one may 
call 'quality' rules. My impression is that SCA can be effectively used by 
Security Folk, QA Folk, or developers with a mind to improve the quality or 
security of their code. Recent inclusion of FindBugs bolsters SCA's 
capabilities to give code-quality commentary.


 *   Coverity's Prevent often gets pigeon-holed as a quality tool, but does 
an exceptional job of tracking down memory issues in C and C++. Skilled security 
consultants will tell you that failing to fix Prevent's results in your code 
will result in various memory-based command injection opportunities (BO, format 
string, write-anywheres, etc.). It also effectively targets time-and-state 
issues, as well as other classes of bug. Prevent can effectively be used by 
Security Folk and Developers (or your rare hardcore QA person) to improve code 
quality and squelch opportunity for exploit.


 *   Klocwork's tool targets rule spaces similar to Fortify's, but possesses 
fewer rules. Often pegged as a quality tool (as well), I do find its UI (more 
than its engine) possesses helpful features that only a QA professional would 
enjoy. This includes its defect-density calculation, reverse-engineering 
capabilities, and its reporting/time-series style. Klocwork can be effectively 
used by a Security guy to find security bugs, but I believe Fortify and Ounce 
have widened the rules gap in recent years.

Tackling your other questions in rapid succession:

There is no difference, technically, between the ability to scan for quality or 
security. However, each engine focuses on parsing and keeping track of only 
that state which provides meaningful data to its rules. You can imagine that 
Fortify carries a fair amount of information about where data came from and 
what functions may be dangerous and can therefore support new security rules 
easily. It doesn't carry around information to aggregate defect density readily 
like K7 can. Does this make one intrinsically better than the other for quality 
or security? Perhaps, having worked on static analysis tools, I'm cranky, but I 
say, "No." If the market clearly mandated something specifically, all the 
vendors would augment their engines to support it. Some would be in a better 
position to offer it than others.

When I talk to vendors about COBOL and similar support, they shudder. I think 
this space represents a huge opportunity for the first vendor to support it, 
but since these are commercial organizations, I wouldn't hold your breath on 
near-term support.

I could answer how these tools support new languages, but that doesn't seem 
like public domain knowledge. I'll let the vendors tackle that 'un.


John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


*(1) I'm also explicitly dodging the quality vs. security debate here. Having 
read/posted to this list for the last 7 years, that semi-annual topic has been 
flogged more than your average dead equine.



From: McGovern, James F (HTSC, IT) [EMAIL PROTECTED]

Most recently, we have met with a variety of vendors including but not
limited to: Coverity, Ounce Labs, Fortify, Klocwork, HP and so on. In
the conversation they all used interesting phrases to describe how they
classify

[SC-L] Technology-specific Security Standards

2007-05-23 Thread John Steven
All,

My last two posts to Cigital's blog covered whether to make your security
standards technology-stack-specific and code-centric or to be more general
about them:

http://www.cigital.com/justiceleague/2007/05/18/security-guidance-and-its-%e2%80%9cspecificity-knob%e2%80%9d/

And

http://www.cigital.com/justiceleague/2007/05/21/how-to-write-good-security-guidance/

Dave posted a comment on the topic, which I'm quoting here:
-
Your point about the "perishability" of such prescriptive checklists does
make the adoption of such a program fairly high-maintenance. Nothing wrong
with that, but expectations should be set early that this would not be a
fire-and-forget type of program, but rather an ongoing investment.
-

I agree, specifying guidance at this level does take a lot more effort; you
get what you pay for, eh? I responded in turn with a comment of my own. I've
seen some organizations control this cost effectively and still get value:

See:
http://www.cigital.com/justiceleague/2007/05/18/security-guidance-and-its-%e2%80%9cspecificity-knob%e2%80%9d/#comment-1048

Some people think my stand controversial...

What do you guys think?


John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.




Re: [SC-L] How is secure coding sold within enterprises?

2007-03-19 Thread John Steven
 attach a PPT ;)


John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F

Blog: http://www.cigital.com/justiceleague
http://www.cigital.com
Software Confidence. Achieved.


On Mar 19, 2007, at 4:12 PM, McGovern, James F ((HTSC, IT)) wrote:

I agree with your assessment of how things are sold at a high level,  
but I am still struggling, in that it takes more than just a graphical  
version of your points to sell. Hence I am still attempting to figure out a  
way to get my hands on some PPTs that are used internally at  
enterprises prior to consulting engagements, and I think a better  
answer will emerge. A PPT may provide a sense of budget, timelines,  
roles and responsibilities, who needed to buy in, industry metrics,  
quotes from noted industry analysts, etc., that will help shortcut my  
own work so I can start moving toward the more important stuff.

-Original Message-
From: Andrew van der Stock [mailto:[EMAIL PROTECTED]
Sent: Monday, March 19, 2007 2:50 PM
To: McGovern, James F (HTSC, IT)
Cc: SC-L
Subject: Re: [SC-L] How is secure coding sold within enterprises?

There are two major methods:

Opportunity cost / competitive advantage (the Microsoft model)
Recovery cost reductions (the model used by most financial  
institutions)


Generally, opportunity cost is where an organization can further  
its goals by a secure business foundation. This requires the CIO/ 
CSO to be able to sell the business on this model, which is hard  
when it is clear that many businesses have been founded on insecure  
foundations and do quite well nonetheless. Companies that choose to  
be secure have a competitive advantage, an advantage that will  
increase over time and will win conquest customers. For example  
(and this is my humble opinion), Oracle’s security is a long-standing  
"unbreakable" joke, and in the meantime MS ploughed billions  
into fixing their tattered reputation by making it a competitive  
advantage, and thus making their market dominance nearly complete.  
Oracle is now paying for their CSO’s mistake in not understanding  
this model earlier. Forward-looking financial institutions are now  
using this model, such as my old bank (with its SMS transaction  
authentication feature) winning many new customers by not only  
promoting itself as secure, but doing the right thing and  
investing in essentially eliminating Internet banking fraud. It  
saves them money, and it works well for customers. This is the best  
model, but the hardest to sell.
model, but the hardest to sell.


The second model is used by most financial institutions. They are  
mature risk managers and understand that a certain level of risk  
must be taken in return for doing business. By choosing to invest  
some of the potential or known losses in reducing the potential for  
massive losses, they can reduce the overall risk present in the  
corporate risk register, which plays well to shareholders. For  
example, if you invest $1m in securing a cheque clearance process  
worth (say) $10b annually to the business, and that reduces cheque  
fraud by $5m per year and eliminates $2m of unnecessary overhead  
every year, security is an easy sell with obvious targets to  
improve profitability. A well managed operational risk group will  
easily identify the riskiest aspects of a mature company’s  
activities, and it’s easy to justify improvements in those areas.


The FUD model (used by many vendors - “do this or the SOX boogeyman  
will get you”) does not work.


The do nothing model (used by nearly everyone who doesn’t fall into  
the first two categories) works for a time, but can spectacularly  
end a business. Card Systems anyone? Unknown risk is too risky a  
proposition, and is plain director negligence in my view.


Thanks,
Andrew


On 3/19/07 11:35 AM, McGovern, James F (HTSC, IT)  
[EMAIL PROTECTED] wrote:


I am attempting to figure out how other Fortune enterprises have  
gone about selling the need for secure coding practices and can't  
seem to find the answer I seek. Essentially, I have discovered that  
one of a few scenarios exists: (a) the leadership chain was highly  
technical and intuitively understood the need; (b) the primary  
business model of the enterprise is banking, investments,  
etc., where the risk is perceived as higher if it is not performed; (c)  
it was strongly encouraged by a member of a very large consulting  
firm (e.g. McKinsey, Accenture, etc.).


I would like to understand the PowerPoint decks that  
employees of Fortune enterprises use to sell the concept PRIOR to  
bringing in consultants and vendors to help them fulfill the need.  
Has anyone run across any PPT that best outlines this for  
demographics where the need is real but considered less important  
than other initiatives?








Re: [SC-L] Code Analysis Tool Bakeoff

2007-01-08 Thread John Steven
I think Gunnar hit a lot of the important points. Bake offs do  
provide interesting data. I have a few slide decks which I've created  
to help companies with this problem, and would be happy to provide  
them to anyone willing to email me side-channel. Of the items Gunnar  
listed, I find that baking off tools helps organizations understand  
where they're going to have to apply horsepower and money.

For instance, companies that purchase Coverity's Prevent seem to have  
little trouble getting penetration into their dev. teams, even beyond  
initial pilot.  Model tuning provides breeze-easy ability to keep  
'mostly effective' rules in play and still reduce false positives.  
However, with that ease of adoption and developer-driven results  
interpretation, orgs. buy some inflexibility in terms of later  
extensibility. Java support, now only in beta, is sorely lacking, and the  
mechanisms by which one writes custom checkers pose a stiff learning  
curve. Whereas, when one adopts Fortify's Source Analyzer, developer  
penetration will be _the_ problem unless the piloting team bakes a  
lot of rule tuning into the product's configuration and results  
pruning into the usable model prior to roll-out. However, later  
customization seems easiest of any of the tools I'm familiar with.  
Language and rules coverage seems, at the macro-level, consistently  
the most robust.

In contrast, it takes real experience to illuminate each tool's  
difference in the accuracy department. Only a bakeoff that contains  
_your_ organization's code can help cut through the fog of what each  
vendor's account manager will promise. The reason seems to be that  
the way a lot of these tools behave relative to each other  
(especially Prexis, K7, and Source Analyzer) depends greatly on  
minute details of how they implemented rules. However, at the end of  
the day, their technologies remain shockingly similar (at least as  
compared to products from Coverity, Secure Software, or Microsoft's  
internal Prefix).

For instance, in one bake off, we found that (with particular open  
source C code) Fortify's tool found more unique instances of  
overflows on stack-based, locally declared buffers, with offending  
locally declared length-specifiers. However, Klocwork's tool was  
profoundly more accurate in cases in which the overflow had similar  
properties but represented an 'off by one' error within a buffer  
declared as a fixed length array.

Discussing tradeoffs in tool implementation at this level leads  
bakers down a bevy of rabbit holes. Looking at them to the extent  
Cigital does, for deep understanding of our clients' code and how  
_exactly_ the tool is helping/hurting us, isn't _your_ goal. But, by  
collecting data on seven figures' worth of your own code base, you can  
start to see what trends in your programmers' coding practices play to  
which tools. This can, in fact, help you make a better tool choice.


John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F
http://www.cigital.com
Software Confidence. Achieved.


On Jan 6, 2007, at 11:27 AM, Gunnar Peterson wrote:

 1. I haven't gotten a sense that a bakeoff matters. For example, if I wanted
 to write a simple JSP application, it really doesn't matter if I use Tomcat,
 Jetty, Resin, or BEA from a functionality perspective; while they may each have
 stuff that others don't, at the end of the day they are all good enough. So is
 there really that much difference in comparing, say, Fortify to OunceLabs or
 whatever other tools in this space exist, vs. simply choosing whichever one
 wants to cut me the best deal (e.g. site license for $99 a year :-) ?


 I recommend that companies do a bakeoff to determine:

 1. ease of integration with dev process - everyone's dev/build process is
 slightly different

 2. signal-to-noise ratio - is the tool finding high-priority/high-impact
 bugs?

 3. remediation guidance - finding is great, fixing is better; how
 actionable and relevant is the remediation guidance?

 4. extensibility - say you have a particular interface, like MQ Series for
 example, which has homegrown authN and authZ foo that you want to use the
 static analysis to determine if it is used correctly. How easy is it to
 build/check/enforce these rules?

 5. roles - how easy is it to separate out roles/reports/functionality, like
 developer, ant jockey, and auditor?

 6. software architecture span - your high-risk/high-priority apps are
 probably multi-tier w/ lots of integration points; how much visibility into
 how many integration points and tiers does the static analysis tool allow
 you to see? How easy is it to correlate across tiers and interfaces?






Re: [SC-L] Ajax one panel

2006-05-22 Thread John Steven

Johan,

Yes, the attacks are feasible. Please refer to the Java language  
spec. on inner/outer class semantics and fool around with simple test  
cases (and javap -c) to show yourself what's happening during the  
compile step.


Attacks require getting code inside the victim VM, but mine pass  
verification silently (even with the verifier turned on). Calling the  
privileged class to lure it into doing your bidding requires only an  
open package (not signed and sealed -- again, see the spec.). Other fun  
and excitement can be had if the developer hasn't been careful enough  
to define the PrivilegedAction subclass as an explicit top-level class  
and has passed information to and fro using the inner-class syntactic  
sugar rather than explicit method calls defined pre-compile time.
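
A compressed sketch of the sugar at issue (names are hypothetical, and the 
synthetic accessor's exact name is compiler-specific -- confirm with javap -c 
on your own classes; recent JDKs replaced these accessors with nest-based 
access):

    import java.security.AccessController;
    import java.security.PrivilegedAction;

    public class Outer {
        private String secret = "s3cr3t";

        void doWork() {
            AccessController.doPrivileged(new PrivilegedAction<Void>() {
                public Void run() {
                    // Reading the private outer field makes javac emit a
                    // package-private accessor like Outer.access$000(Outer).
                    System.out.println(secret);
                    return null;
                }
            });
        }
    }

    // Any class that lands in the same (open, unsealed) package can reach
    // that accessor: hand-built bytecode passes the verifier, and even
    // plain reflection works here without setAccessible().
    class Evil {
        static String steal(Outer victim) throws Exception {
            return (String) Outer.class
                .getDeclaredMethod("access$000", Outer.class)  // javac-specific name
                .invoke(null, victim);
        }
    }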



John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F
http://www.cigital.com
Software Confidence. Achieved.


On May 21, 2006, at 8:23 AM, Johan Peeters wrote:

That sounds like a very exciting idea, but I am not sure about the  
mechanics of getting that to work. I assume the permissions for the  
untrusted code would be in the closure's environment. Who would put  
them there? How would the untrusted code call privileged code?

Has anyone done this?

kr,

Yo

Gary McGraw wrote:

Hi yo!
Closure is very helpful when doing things like crossing trust  
boundaries.  If you look at the craziness involved in properly  
 invoking the doPrivileged() stuff in Java 2, the need for closure is  
strikingly obvious.
However, closure itself is not as important as type safety is.
 So the fact that JavaScript may (or may not) have closure pales in  
 comparison to the fact that it is not type safe.

Ajax is a disaster from a security perspective.
gem
 -Original Message-
From:   Johan Peeters [mailto:[EMAIL PROTECTED]
Sent:   Sat May 20 15:44:46 2006
To: Gary McGraw
Cc: Mailing List, Secure Coding; SSG
Subject:Re: [SC-L] Ajax one panel
I think Java would have been a better language with closures, but  
I am intrigued that you raise them here. Do you think closures  
present security benefits? Or is this a veiled reference to Ajax?  
I guess JavaScript has closures.

kr,
Yo
Gary McGraw wrote:
 OK... it was JavaOne.  But it seemed like Ajax One on the show  
 floor.   I participated in a panel yesterday with superstar Bill  
 Joy.  I had a chance to talk to Bill for a while after the gig  
 and asked him why Java did not have closure.  Bill said he was on  
 a committee of five, and got out-voted 2 to 3 on that one (and  
 some other stuff too).  You know the other pro vote had to be Guy  
 Steele.  Most interesting.  Tyranny of the majority, even in Java.


 Btw, Bill also said they tried twice to build an OS on Java and  
 failed both times.  We both agree that a type-safe OS will happen  
 one day.


 Here's a blog entry from John Waters that describes the panel  
from his point of view.


http://www.adtmag.com/blogs/blog.aspx?a=18564

gem
www.cigital.com
www.swsec.com


Sent from my treo.


 

 








--
Johan Peeters
program director
http://www.secappdev.org
+32 16 649000






Re: [SC-L] Bugs and flaws

2006-02-03 Thread John Steven
Ah,

The age-old Gary vs. jOHN debate. I do believe, along the continuum of
architecture--design--implementation, that I've shown the ability to discern
flawed design from source code in source code reviews.

Cigital guys reading this thread have an advantage in that they know both
the shared and exclusive activities defined as part of our architectural and
code review processes. The bottom line is this: as you look at source code,
given enough gift for architecture, you can identify _some_ of the design
(whether intended or implemented) from the implementation, and find _some_
flaws. Before you get wound up and say "Maybe YOU, jOHN" (tongue fully
in cheek), the Struts example I gave is one case. Looking at a single class
file (the privileged Servlet definition), you can determine that the Lead
Developer/Architect has not paid enough attention to authorization when
he/she designed how the application's functionality was organized.
Admittedly, _some_ (other) architectural flaws do demand attention paid only
through activities confined to architectural analysis--not code review.
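
A hypothetical sketch of the kind of single class file in question -- a
'privileged' Struts action whose execute() does administrative work with no
authorization check in sight, betraying a design-level decision about where
access control lives:

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.struts.action.*;

    public class DeleteUserAction extends Action {
        public ActionForward execute(ActionMapping mapping, ActionForm form,
                                     HttpServletRequest request,
                                     HttpServletResponse response) throws Exception {
            // No role check, no isUserInRole(), no authorization service:
            // anyone who can reach the mapping can delete users.
            long id = Long.parseLong(request.getParameter("id"));
            UserStore.delete(id);   // hypothetical persistence helper
            return mapping.findForward("success");
        }
    }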
 
Think back again to my original email. The situations I present (both with
the physical table and Struts) present a 'mistake' (IEEE parlance) that can
manifest itself in terms of both an architectural flaw and implementation
bug (Cigital parlance).

I believe that the concept that Jeff (Payne), Cowan, Wysopal, and even
Peterson (if you bend it correctly) present is that the 'mistake' may
cross-cut the SDLC--manifesting itself in each of the phases' artifacts. IE:
If the mistake was in requirements, it will manifest itself in design
deficiency (flaw), as well as in the implementation (bug).

Jeff (Williams) indicates that, since progress rolls downstream in the SDLC,
you _could_ fix the 'mistake' in any of the phases where it manifests itself,
but that an efficiency argument demands you look in the code. I implore the
reader to recall my original email. I mention that when characterized as a bug,
the level of effort required to fix the 'mistake' is probably less than if
it's characterized as a flaw. However, in doing so, you may miss other
instances of the mistake throughout the code.

I whole-heartedly agree with Jeff (Williams) that:

1) Look to the docs. for the 'right' answer.
2) Look to the code for the 'truth'.
3) Look to the deployed bins. for 'God's Truth'.
 
The variance in these artifacts is a key element in Cigital's architectural
analysis.

Second, (a point I made in my original email) the objective is to give the
most practical advise as possible to developers for fixing the problem. I'll
just copy-paste it from the original:
-
Summarizing, my characterization of a vulnerability as a bug or a flaw has
important implications towards how it's mitigated. In the case of the Struts
example, the bug-based fix is easiest--but in so characterizing the problem
I may (or may not) miss other instances of this vulnerability within the
application's code base.

How do I know how to characterize a vulnerability along the continuum of
bugs--flaws?  I don't know for sure, but I've taken to using my experience
over a number of assessments to upcast typically endemic problems as flaws
(and solve them in the design or architecture) and downcast those problems
that have glaring quick-fixes. In circumstances where both those heuristics
apply, I suggest a tactical fix to the bug, while prescribing that further
analysis take the tack of further fleshing out the flaw.
-

Where my opinion differs from the other posters is this. I believe that
where a 'mistake' manifests itself in multiple phases of the software
development lifecycle, you're most apt to completely MITIGATE its effects by
characterizing it as early in the lifecycle as possible, as design or even
requirements. As Williams indicates, to the contrary, you may FIND the
problem most easily later in the lifecycle. Perhaps in the code itself.

Look, 
McGraw put forth the 'bug' and 'flaw' nomenclature. It's useful because
there is value in explicitly pinning the vulnerability in architecture,
design, or code if it helps the dev. org. get things sorted out securely and
throughout their application. My experience is that this value is real.

The message of the 'defect'/'mistake' purist resonates with me as well:
it's all simply a mistake some human made along the path of developing the
application. But! I can assure you, to the extent that root-cause analysis
is valuable, telling a dev. team where to most effectively contend with a
vulnerability is also valuable.

In other words, smart guys will always find the problems--by hook, or by
crook--but it takes classification to aid in efficient and thorough
mitigation.
 
-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908


 From: Gary McGraw [EMAIL PROTECTED]
 
 I'm sorry

[SC-L] The role static analysis tools play in uncovering elements of design

2006-02-03 Thread John Steven

Jeff,

An unpopular opinion I've held is that static analysis tools, while very helpful in finding problems, inhibit a reviewer's ability to collect as much information about the structure, flow, and idiom of code's design as the reviewer might find if he/she spelunks the code manually.

I find it difficult to use tools other than source code navigators (Source Insight) and scripts to facilitate my code understanding (at the design level).

Perhaps you can give some examples of static analysis library/tool use that overcome my prejudice... or are you referring to the navigator tools as well?

-
John Steven 
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc. | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD 94B0 AE7F EEF4 62D5 F908


snipped
Static analysis tools can help a lot here. Used properly, they can provide
design-level insight into a software baseline. The huge advantage is that
it's correct.

--Jeff 
snipped





Re: [SC-L] Bugs and flaws

2006-02-02 Thread John Steven
Kevin,

Jeff Payne and I were talking about this last night. Jeff's position was,
...Or, you could just use the existing quality assurance terminology and
avoid the problem altogether. I agree with you and him; standardizing
terminology is a great start to obviating confusing discussions about what
type of problem the software faces.

Re-reading my post, I realize that it came off as heavy support for
additional terminology. Truth is, we've found that the easiest way to
communicate this concept to our Consultants and Clients here at Cigital has
been to build the two buckets (flaws and bugs).

What I was really trying to present was that Security people could stand to
be a bit more thorough about how they synthesize the results of their
analysis before they communicate the vulnerabilities they've found, and what
mitigating strategies they suggest.

I guess, in my mind, the most important thing with regard to classifying
the mistakes software people make that lead to vulnerability (the piety of
vulnerability taxonomies aside) is to support:

1) Selection of the most effective mitigating strategy -and-
2) Root cause analysis that will result in changes in software development
that prevent software folk from making the same mistake again.

-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

 From: Wall, Kevin [EMAIL PROTECTED]
 
 John Steven wrote:
 ...
 2) Flaws are different in important ways from bugs when it comes to presentation,
 prioritization, and mitigation. Let's explore by physical analog first.
 
 Crispin Cowan responded:
 I disagree with the word usage. To me, bug and flaw are exactly
 synonyms. The distinction being drawn here is between implementation
 flaws vs. design flaws. You are just creating confusing jargon to
 claim that flaw is somehow more abstract than bug. Flaw ::= defect
 ::= bug. A vulnerability is a special subset of flaws/defects/bugs that
 has the property of being exploitable.
 
 I'm not sure if this will clarify things or further muddy the waters,
 but... partial definitions taken from the SWEBOK
 (http://www.swebok.org/ironman/pdf/Swebok_Ironman_June_23_%202004.pdf)
 which in turn were taken from the IEEE standard glossary
 (IEEE610.12-90) are:
 + Error: A difference between a computed result and the correct result
 + Fault: An incorrect step, process, or data definition
   in a computer program
 + Failure: The [incorrect] result of a fault
 + Mistake: A human action that produces an incorrect result
 
 Not all faults are manifested as errors. I can't find an online
 version of the glossary anywhere, and the one I have is about 15-20 years old
 and buried somewhere deep under a score of other rarely used books.
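 
 A quick Java illustration (mine--not from the IEEE glossary) of why not all
 faults manifest as errors; the fault below ships in every build but produces
 no failure until an empty array arrives:
 
     // Illustrative only: seeding from values[0] is a fault in every build,
     // yet no error or failure is observed until an empty array arrives.
     final class MaxExample {
         static int max(int[] values) {
             int best = values[0];   // fault: latent until values.length == 0
             for (int i = 1; i < values.length; i++) {
                 if (values[i] > best) best = values[i];
             }
             return best;
         }
     }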
 
 My point is though, until we start with some standard terminology this
 field of information security is never going to mature. I propose that
 we build on the foundational definitions of the IEEE-CS (unless their
 definitions have bugs ;-).
 
 -kevin
 ---
 Kevin W. Wall  Qwest Information Technology, Inc.
 [EMAIL PROTECTED] Phone: 614.215.4788
 The reason you have people breaking into your software all
 over the place is because your software sucks...
  -- Former whitehouse cybersecurity advisor, Richard Clarke,
 at eWeek Security Summit








Re: [SC-L] Bugs and flaws

2006-02-01 Thread John Steven
apply, I suggest a tactical fix to the bug, while prescribing that further
analysis take the tack of further fleshing out the flaw.

Is this at all helpful?


-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908


 From: Crispin Cowan [EMAIL PROTECTED]
 
 Gary McGraw wrote:
 If the WMF vulnerability teaches us anything, it teaches us that we need
 to pay more attention to flaws.
 The flaw in question seems to be "validate inputs"--i.e., don't just
 trust network input (esp. from an untrusted source) to be well-formed.
 
 Of special importance to the Windows family of platforms seems to be the
 propensity to base security controls on the file-type extension (the
 letters after the dot in the file name, such as .wmf) while choosing the
 application that interprets the data via magic file typing that looks at
 the content.
 
 My favorite ancient form of this flaw: .rtf files are much safer than
 .doc files, because the RTF standard does not allow you to attach
 VBscript (where VB stands for Virus Broadcast :) while .doc files
 do. Unfortunately, this safety feature is nearly useless, because if you
 take an infected whatever.doc file, and just *rename* it to whatever.rtf
 and send it, then MS Word will cheerfully open the file for you when you
 double click on the attachment, ignore the mismatch between the file
 extension and the actual file type, and run the fscking VB embedded within.
 
 I am less familiar with the WMF flaw, but it smells like the same thing.
 
 Validate your inputs.
 
 There are automatic tools (taint and equivalent) that will check whether
 you have validated your inputs. But they do *not* check the *quality* of
 your validation of the input. Doing a consistency check on the file name
 extension and the data interpreter type for the file is beyond (most?)
 such checkers.
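 
 As a hedged sketch of the consistency check in question (mine; the magic
 numbers are a tiny illustrative subset, not an exhaustive or authoritative
 list):
 
     import java.io.IOException;
     import java.io.InputStream;
     import java.nio.file.Files;
     import java.nio.file.Path;
 
     // Illustrative only: compare the file-name extension against the first
     // bytes of the actual content, instead of trusting either alone.
     final class FileTypeCheck {
         private static final byte[] RTF_MAGIC = { '{', '\\', 'r', 't', 'f' };
         private static final byte[] OLE_MAGIC =      // legacy .doc container
             { (byte) 0xD0, (byte) 0xCF, (byte) 0x11, (byte) 0xE0 };
 
         static boolean extensionMatchesContent(Path file) throws IOException {
             String name = file.getFileName().toString().toLowerCase();
             byte[] head = new byte[8];
             int n;
             try (InputStream in = Files.newInputStream(file)) {
                 n = in.read(head);
             }
             if (n < RTF_MAGIC.length) return false;  // too short to classify
             if (name.endsWith(".rtf")) return startsWith(head, RTF_MAGIC);
             if (name.endsWith(".doc")) return startsWith(head, OLE_MAGIC);
             return false;                            // unknown type: fail closed
         }
 
         private static boolean startsWith(byte[] data, byte[] prefix) {
             for (int i = 0; i < prefix.length; i++) {
                 if (data[i] != prefix[i]) return false;
             }
             return true;
         }
     }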
 
   We spend lots of time talking about
 bugs in software security (witness the perpetual flogging of the buffer
 overflow), but architectural problems are just as important and deserve
 just as much airplay.
   
 IMHO the difference between bugs and architecture is just a
 continuous grey scale of degree.








Re: [SC-L] Information Security Considerations for Use Case Modeling

2005-06-27 Thread John Steven
 def., and keep
them focused on the user's/system's security goals. Hand off to designers to
get them to sign up to system construction, and only then deal with
constraints. To go further into the particulars here, I'd have to inject a
ton of change management text... So I punt here.

This, IMO, leads to much more intelligent testing than "Is SSL enabled?
Check." By specifying requirements that speak to security goals and attack
resistance, you've given testers more wherewithal as to how to stress the
system, as an attacker would.
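
A hedged sketch of the difference (the validator is a hypothetical stand-in
and JUnit 4 is assumed; nothing here is from a real engagement)--a test
derived from an attack-resistance requirement rather than a checklist item:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    // Illustrative only: the requirement under test is "reject path traversal
    // in user-supplied file names," not "is validation turned on?"
    public class FileNameValidatorTest {

        // Hypothetical stand-in validator: permit simple names only.
        static boolean isSafeFileName(String name) {
            return name.matches("[A-Za-z0-9_\\-]+(\\.[A-Za-z0-9]+)?");
        }

        @Test
        public void rejectsTraversalAndSeparators() {
            assertFalse(isSafeFileName("../../etc/passwd"));
            assertFalse(isSafeFileName("..\\..\\secret.doc"));
            assertFalse(isSafeFileName("reports/2005.rtf"));
        }

        @Test
        public void acceptsPlainNames() {
            assertTrue(isSafeFileName("quarterly_report.rtf"));
        }
    }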

***Specific Tip: Leave no goal unexplored before beginning to architect. Do
not use architecture definition as a mechanism for exploring software
security goals.

***Specific Tip: Use your goals and high-level security requirements to
excise security mechanisms or expenditure that goes well above-and-beyond
your risks.

**Use Risk Analysis and Threat Modeling to Curb Security Requirements
Explosion**
 Just as threat modeling and risk analysis can create security requirements,
they can be used to constrain their unbounded growth as well. Risk analysis
is particularly useful in determining whether or not you have too many (or
too onerous) security requirements initially. Threat modeling, which requires
at least initial design at its core, can help with requirements work during
change management activities.

Purely focusing on their requirements-pruning potential, these two
activities allow a development team to prioritize which attacks will have
the highest impact, and to focus requirements and design on addressing only
these issues.

So, this is just a sampling from a larger laundry list. But hopefully it
provides some more guidance to those whose appetites were whetted by
Gunnar's and Johan's posts.

-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908


 From: Gunnar Peterson [EMAIL PROTECTED]

 When I coach teams on security in the SDLC, I ask them to first see
 what mileage they can get out of existing artifacts, like Use Cases,
 User Stories, and so on. While these artifacts and processes were not
 typically designed with security in mind, there are generally a lot of
 underutilized properties in them that can be exploited by security
 architects and analysts.

 The Use Case model adds traceability to security requirements, but
 just as importantly it allows the team to see not just the static
 requirements; rather, you can see the requirements in a behavioral flow.
 Since so much of security is protocol based and context sensitive,
 describing the behavioral aspects is important to comprehensibility.

 At the end of exploring existing artifacts, then there needs to be a
 set of security-centric artifacts like threat models, misuse cases,
 et al. The output (e.g., design decisions) of these security-centric
 models is fed back into the requirements in an iterative fashion.

 Security analysts and architects cannot do all the work that goes
 into secure software development by themselves. There may be a
 handful of security people supporting hundreds of developers. This is
 why we need to educate not just developers on writing secure code,
 but also business analysts on security Use Cases, requirements, etc.
 (the main purpose of my article), testers on how to write/run/read
 security test cases, and so on.





