Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread John Steven
 you sucked in an 
external one rather than writing it) applies to your applications' threat model 
and ticks off all the elements of your security policy. Because, having hooked 
it into their apps, teams are going to want a fair amount of exoneration from 
normal processes (some of which is OK, but a lot can be dangerous). Second, 
please make sure it's actually secure--it will be a fulcrum of your security 
controls' effectiveness. Make sure that assessment program proves your 
developers used it correctly, consistently, and thoroughly throughout their 
apps. What do I tell you about ESAPI and your MVC frameworks (Point #3 from 
above)? -sigh- That's a longer discussion. And, by all means, don't think you 
can let your guard down on your pen-testing. Is it a silver bullet? No. 

Is ESAPI the only approach? No. I submit that it's -A- way. I hope this email 
outlines that effectively. And viewed from a knowledgeable but separate 
perspective: the ESAPI approach has pluses and minuses just like all the 
others. 
 

John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.
 
(*1) http://bsi-mm.com/ssf/intelligence/sfd/?s=sfd1.1#sfd1.1
(*2) During the AppSecDC summit, Jeff indicated the ESAPI project would later 
pilot SAMM but the global projects committee indicated that getting OWASP 
projects to follow some secure development touchpoints is too 
onerous/impossible. Dinis, I'll note, is a huge proponent of adherence.


On Jan 6, 2010, at 4:36 PM, James Manico wrote:

> Hello Matt,
> 
> Java EE still has NO support for escaping and lots of other important 
> security areas. You need something like OWASP ESAPI to make a secure app even 
> remotely possible. I was once a Sun guy, and I'm very fond of Java and Sun. 
> But JavaEE 6 does very little to raise the bar when it comes to Application 
> Security.
> 
> - Jim
> 
> On Tue, Jan 5, 2010 at 3:30 PM, Matt Parsons  wrote:
> From what I read, it appears that Java EE 6 could change a few rules.  It
> looks to me like Java is checking for authorization and authentication with
> this new framework.  If that is the case, I think that static code analyzers
> could change their rule sets to check what is normally a manual process in
> code review: authentication and authorization.
> Am I correct in my assumption?
> 
> Thanks,
> Matt
> 
> 
> Matt Parsons, MSM, CISSP
> 315-559-3588 Blackberry
> 817-294-3789 Home office
> mailto:mparsons1...@gmail.com
> http://www.parsonsisconsulting.com
> http://www.o2-ounceopen.com/o2-power-users/
> http://www.linkedin.com/in/parsonsconsulting
> 
> 
> 
> 
> 
> 
> -Original Message-
> From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org]
> On Behalf Of Kenneth Van Wyk
> Sent: Tuesday, January 05, 2010 8:59 AM
> To: Secure Coding
> Subject: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security
> made simple ! | Core Security Patterns Weblog
> 
> Happy new year SC-Lers.
> 
> FYI, interesting blog post on some of the new security features in Java EE
> 6, by Ramesh Nagappan.  Worth reading for all you Java folk, IMHO.
> 
> http://www.coresecuritypatterns.com/blogs/?p=1622
> 
> 
> Cheers,
> 
> Ken
> 
> -
> Kenneth R. van Wyk
> SC-L Moderator
> 
> 
> 
> 
> 
> -- 
> -- 
> Jim Manico, Application Security Architect
> jim.man...@aspectsecurity.com | j...@manico.net
> (301) 604-4882 (work)
> (808) 652-3805 (cell)
> 
> Aspect Security™
> Securing your applications at the source
> http://www.aspectsecurity.com





___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread John Steven
Jim,

Yours was the predicted response. The ref-impl. to API side-step does not fix 
the flaw in the argument though.

No, you do not need "A" ESAPI to build secure apps. 

Please re-read my email carefully. 

Alternatives:
1) Some organizations adopt OWASP ESAPI's ref-impl.
2) Others build their own; they do agree and see the value, yes

#1 and #2 agree with your position.

3) Some secure their toolkits (again, "a la secure struts")

Calling such a "secure Struts" an organization's ESAPI perverts the ESAPI 
concept far too greatly to pass muster. Indeed, were it to pass, it would 
violate properties 3 and 4 (and very likely 2) within my previous email's 
advantage list. 

Mr. Boberski, you too need to re-read my email. I advise you strongly not to 
keep saying that ESAPI is "like PK-enabling" an APP. I don't think many people 
got a good feeling about how much they spent on, or how effective their PKI 
implementation was ;-). Please consider how you'd ESAPI-enable the millions of 
lines of underlying framework code beneath the app.

4) Policy + Standards, buttressed with a robust assurance program

Some organizations have sufficiently different threat models and deployment 
scenarios within their 'four walls' that they opt for specifying an overarching 
policy and checking each sub-organization's compliance--commensurate with their 
risk tolerance and each app deployment's threat model. Each sub-organization 
may-or-may-not choose to leverage items one and two from this list. I doubt, 
however, you'd argue that more formal methods of verification don't suffice to 
perform 'as well' as ESAPI in securing an app (BTW, I have seen commercial 
implementations opt for such verification as an alternative to a security 
toolkit approach). Indeed, a single security API would likely prove a 
disservice if crammed down the throats of sub-organizations that differ too 
greatly.

At best, the implicit "ESAPI or the highway" campaign slogan applies to only 
50% of the alternatives I've listed. And since the ESAPI project doesn't have 
documented and publicly available good, specific, actionable requirements, 
mis-use cases, or a threat model from which it's working, the OWASP ESAPI 
project doesn't do as much as it could for the #2 option above.

Jim, Mike, I see your posts all throughout the blogosphere and mailing 
lists. Two-line posts demanding people adopt ESAPI or forgo all hope can put 
people off. They conjure close-minded religion to me. Rather:

* Consider all four of the options above; one might be better than OWASP ESAPI 
within the context of the post
* Consider my paragraph following Point #4. Create:

        * An ESAPI mis-use case guide; back out the security policy it 
          manifests or the requirements it implements (and don't point me 
          to the unit tests--I've read them)
        * An ESAPI threat model (For which apps will developers have
          their expectations met adopting ESAPI? Which won't?)
        * A document describing experiment results, before and after ESAPI: 
          how many findings does a pen test produce? A code review?
        * An adoption guide. Apps are only created in a green field
          once. Then they live in maintenance forever. How do you apply 
          ESAPI to a real-world app already in production without 
          risk/regression?

* Generate an argument as to why ESAPI beats these alternatives. Is it cost? 
Speed-to-market? What?
* Finally, realize that it's OK that there's more than one way to do things. 
Revel in it. It's what makes software an exciting field. 

In the meantime, rest assured that those of us out there who have looked get 
that ESAPI can be a good thing.


John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.

On Jan 7, 2010, at 10:56 AM, Jim Manico wrote:

> John,
> 
> You do not need OWASP ESAPI to secure an app. But you need "A" ESAPI  
> for your organization in order to build secure Apps, in my opinion.  
> OWASP ESAPI may help you get started down that path.
> 
> An ESAPI is no silver bullet, there is no such thing as that in  
> AppSec. But it will help you build secure apps.
> 
> Jim Manico
> 
> On Jan 6, 2010, at 6:20 PM, John Steven  wrote:
> 
>> All,
>> 
>> With due respect to those who work on ESAPI, Jim included, ESAPI is  
>> not the only way "to make a secure app even remotely possible." And  
>> I believe that underneath their own pride in what they've done--some  
>> of which is very w

Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-12 Thread John Steven
thon object) to assure 
non-security properties (singleton pattern, monitor-lock, and others) are 
upheld in a very syntactically agreeable manner(*4). 

As a last resort, might I suggest using inheritance and encapsulation to stitch 
together framework-provided cut points and ESAPI code. For instance, one can 
simulate [the dreaded] 'multiple inheritance' of both Struts and ESAPI base 
classes by applying the template method pattern within a sub-class of (say) the 
Struts-provided class: the template method would call security controls (such 
as validation or the largely vestigial ESAPI authentication checks) before 
handing off to end-application developer code that handles other controller 
functionality/business logic.
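
A minimal sketch of that stitching--mine, for illustration only, assuming 
Struts 1.x and ESAPI 2.x on the classpath (SecureBaseAction and 
executeSecurely are invented names):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;
import org.owasp.esapi.ESAPI;

public abstract class SecureBaseAction extends Action {

    // The framework-facing entry point is final so application code
    // cannot bypass the security controls.
    @Override
    public final ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        // Security controls run before any business logic.
        ESAPI.authenticator().login(request, response);  // authentication check
        validate(request);                               // app-specific validation hook
        return executeSecurely(mapping, form, request, response);
    }

    // Subclasses may override to validate their own parameters.
    protected void validate(HttpServletRequest request) throws Exception { }

    // The template-method hand-off: end-developer controller logic goes here.
    protected abstract ActionForward executeSecurely(ActionMapping mapping,
            ActionForm form, HttpServletRequest request,
            HttpServletResponse response) throws Exception;
}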

Personally, I believe the strategy of tacking ESAPI calls onto a developer's 
application code manually, on a case-by-case basis, without the techniques 
described above is bound for failure. Developers simply won't be able to reach 
the total consistency required for robust defense in a large existing 
application. If you're going to walk this road, though, then for the love of 
God please deploy SAST to make sure that something is sweeping through and 
looking for that ever-elusive consistency of application I describe.  
 
> And this is not just a wild idea, I'm lucky to witness some of the
> largest institutions on the planet successfully implement ESAPI in the
> real world.
> 
> And sure, you can build a new secure app without an ESAPI. But libs
> like OWASP ESAPI will get you there faster and cheaper.

I'd be very-much interested in data regarding faster and cheaper. With the 
exception of the input validation, canonicalization, and related functionality 
(*5) it seems like a lot of analysis and integration jobs remain when adopting 
ESAPI. I'd also like to know about bug rates relative to non-ESAPI code. I've 
been on the ESAPI mailing list for a while and can't discern from conversation 
much information regarding successful operationalization, though I hear 
rumblings of people working on this very problem. 
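
For the curious, a minimal sketch of the validation-area calls I mean, 
assuming ESAPI 2.x with an ESAPI.properties on the classpath (the wrapper 
class is mine, for illustration):

import org.owasp.esapi.ESAPI;

public class EsapiBasics {

    // Reduce an untrusted value to its simplest form before validating it;
    // canonicalize() unwinds multiple/mixed encoding tricks.
    public static String canonicalize(String untrusted) {
        return ESAPI.encoder().canonicalize(untrusted);
    }

    // Encode on the way out for the HTML context, neutralizing XSS payloads.
    public static String toHtml(String untrusted) {
        return ESAPI.encoder().encodeForHTML(untrusted);
    }
}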

Cheers all,

John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.

(*1) This same argument scales down into the platform further: Java EE core, 
underlying OS / hypervisor... etc. 

(*2) Message dated: Jan 2010 00:04:49, ESAPI-Users Mailing list

(*3) Quoting Manico:
> I've used regular expression multi-file-search-and-replace tricks across many 
> million LOC applications

In the case of (*3), I prefer AOP or static analysis to regular expressions 
because of the type safety of those two approaches and the option to log 
transforms. Though, for some cut points, a regular expression may be 
sufficient to accurately 'lock in' a cut point.
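
Concretely, the AOP variant might look like the following annotation-style 
AspectJ sketch--assuming AspectJ weaving and ESAPI 2.x; the "SafeString" 
validation pattern, the 200-character limit, and the com.example package are 
placeholders:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.owasp.esapi.ESAPI;

@Aspect
public class ParameterValidationAspect {

    // Lock in the cut point: every call to getParameter(String) made from
    // woven application code is routed through ESAPI validation.
    @Around("call(String javax.servlet.ServletRequest.getParameter(String))"
            + " && args(name) && within(com.example..*)")
    public String validateParameter(ProceedingJoinPoint pjp, String name)
            throws Throwable {
        String raw = (String) pjp.proceed();
        if (raw == null) {
            return null;
        }
        String safe = ESAPI.validator().getValidInput(
                "http.parameter." + name, raw, "SafeString", 200, false);
        System.out.println("validated parameter: " + name);  // log the transform
        return safe;
    }
}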

(*4) Unlike the purely Struts-based approach, though, developers have to 
remember the annotation, and the change does necessitate recompilation and 
redeployment (be it Python, Java, or .NET).

(*5) These areas appear to have received the lion's share of attention to date 
(rightfully so, and to great avail) 

> On Jan 7, 2010, at 1:02 PM, John Steven  wrote:
> 
>> Jim,
>> 
>> Yours was the predicted response. The ref-impl. to API side-step
>> does not fix the flaw in the argument though.
>> 
>> No, you do not need "A" ESAPI to build secure apps.
>> 
>> Please re-read my email carefully.
>> 
>> Alternatives:
>> 1) Some organizations adopt OWASP ESAPI's ref-impl.
>> 2) Others build their own; they do agree and see the value, yes
>> 
>> #1 and #2 agree with your position.
>> 
>> 3) Some secure their toolkits (again, "a la secure struts")
>> 
>> Indicating such a "secure struts" is an organization's ESAPI
>> perverts the ESAPI concept far too greatly to pass muster. Indeed,
>> were it to, it would violate properties 3 and 4 (and very likely 2)
>> within my previous email's advantage list.
>> 
>> Mr. Boberski, you too need to re-read my email. I advise you
>> strongly not to keep saying that ESAPI is "like PK-enabling" an APP.
>> I don't think many people got a good feeling about how much they
>> spent on, or how effective their PKI implementation was ;-). Please
>> consider how you'd ESAPI-enable the millions of lines of underlying
>> framework code beneath the app.
>> 
>> 4) Policy + Standards, buttressed with a robust assurance program
>> 
>> Some organizations have sufficiently different threat models and
>> deployment scenarios within their 'four walls'...

Re: [SC-L] InformIT: comparing static analysis tools

2011-02-03 Thread John Steven
All,

I followed this article up with a blog entry, more targeted at adopting 
organizations. I hope you find it useful:

http://www.cigital.com/justiceleague/2011/02/02/if-its-so-hard-why-bother/


John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.


> hi sc-l,
> 
> John Steven and I recently collaborated on an article for informIT.  The 
> article is called "Software [In]security: Comparing Apples, Oranges, and 
> Aardvarks (or, All Static Analysis Tools Are Not Created Equal)" and is 
> available here:
> 
> http://www.informit.com/articles/article.aspx?p=1680863
> 
> 
> Now that static analysis tools like Fortify and Ounce are hitting the 
> mainstream there are many potential customers who want to compare them and 
> pick the best one.  We explain why that's more difficult than it sounds at 
> first and what to watch out for as you begin to compare tools.  We did this 
> in order to get out in front of "test suites" that purport to work for tool 
> comparison.  If you wonder why such suites may not work as advertised, read 
> the article.
> 
> Your feedback is welcome.





[SC-L] The Organic Secure SDLC

2011-07-20 Thread John Steven
Perhaps less foreboding to those getting their start and 
anointing their first victim champion.

There's also tremendous value (to the 'Organic' model) in admitting what I 
think we all know implicitly: programs may have to "show need" (through 
assessment) in order to progress. It's not shocking to see penetration testing 
at the beginning of 'Organic'--"'sploits always create splash," as I say. 

[Conclusion]
To me, the 'Organic' model suffers from key inaccuracies due to omission. As 
such, it doesn't particularly address the principal criticism of existing 
models. Its value stems from simplicity and a potentially clear way to drive 
its users through key tenets (my summary, not Rohit's):

A) Anoint a champion
B) Show need
C) Educate execs
D) Drive assessment earlier in lifecycle
E) Bake assessment into BAU (QA)

The ability to say these things succinctly to an organization starting its 
Application Security journey provides value. I do not, however, believe that 
we're going to see security assessment applied by business unit/product team QA 
folk in a BAU scenario in the next 3-5 years, with the notable exception of 
organizations like Microsoft. 

If 'Organic' were mine, I would attempt to amplify its positives by converting 
it from a SDL Model to a method accompanied by case studies. I'd drive it 
towards showing, backed by case study, how "climbing the wall" can be 
accomplished most effectively. This is a tough nut and one worth cracking.

-jOHN

John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

twitter: @m1splacedsoul
Rss: http://feeds.feedburner.com/M1splacedOnTheWeb
web: http://www.cigital.com
Software Confidence. Achieved.


* [BW1] 
http://www.businessweek.com/magazine/content/09_02/b4115059823953_page_2.htm
* [JB1] http://swreflections.blogspot.com/2009/04/opensamm-shows-way.html
* [JB2] 
http://swreflections.blogspot.com/2011/06/safer-software-through-secure.html
* [PC1] https://www.pcisecuritystandards.org/security_standards/
* [FM1] http://shiro.apache.org/
* [BS1] Analogous to BSIMM SM1.3 "Educate Executives":  
http://bsimm.com/online/governance/sm/?s=sm1.3#sm1.3
* [BS2] http://www.informit.com/articles/article.aspx?p=1434903

On Jul 19, 2011, at 11:24 AM, Paco Hope wrote:

> Jim,
> 
> You're spot on. BSIMM is not a lifecycle for any company. Heck, it's not even 
> a set of recommendations. It's simply a way to measure what a firm does. It's 
> a model formulated from observations about how some firms' implement software 
> security in their lifecycles. You'll never catch us calling the BSIMM a 
> lifecycle.
> 
> As for not translating into the SMB market, I don't understand that. Unlike, 
> say prescriptive standards which say "thou shalt do X" regardless of how big 
> you are, the BSIMM measures maturity of what a firm actually does. There is 
> no reason an SMB could not measure the maturity of their effort using the 
> BSIMM.
> 
> Maturity is not a function of size. A team of 10 developers might score 
> higher on various criteria than a multi-national bank that has a whole team 
> of people dedicated to app sec. Maturity is a function of the depth to which 
> one takes a certain activity and their capability within that activity.
> 
> This isn't Pac-Man, either. The goal is not to get the highest score and an 
> extra man. :) The goal is to put the right level of effort into the right 
> places. A firm can't do that until they know how much effort they're spending 
> on different activities. The BSIMM will illuminate the level of effort. It 
> allows a firm to decide to rebalance and spread the budget/people around 
> across the activities that make sense. Whether that's a team of 10 developers 
> or a team of 1000 developers, the principle is the same. The execution varies.
> 
> Here's another analogy. You can have a GPS and know your exact coordinates, 
> to within 3 meters, but not know how to get to the airport by car. The BSIMM 
> will tell you your coordinates at the present time. It does not tell you the 
> best way to the airport. It can tell you the crow-fly distance to the 
> airport, but it can't tell you that the airport is where you want to be.
> 
> Paco
> 
> 
> Paco,
> 
> By your same logic I would not consider BSIMM a lifecycle either. It's
> a thermometer to measure an SDLC against what some of the largest
> companies are doing. As others have noted, BSIMM  does not translate
> well into the SMB market where most software is written. Don't get me
> wrong, BSIMM is very interesting data and is useful. But a
> comprehensive secure software lifecycle for every company it is not.
> 
> - Jim Manico




Re: [SC-L] BSIMM-V Article in Application Development Times

2014-01-08 Thread John Steven
Christian, (Stephen)

I’ll confess I’ve only skimmed the discussion but it looks productive. The 
questions posed are good ones. I’ll try to provide a few clarifications from 
“inside” the BSIMM study that may be helpful in pushing the discussion along:

1) Survey structure/technique attributes BSIMM activities by first seeking
certain confirmations. If interview-based confirmation doesn't provide
confidence, the subject is asked for documentation. 

2) The BSIMM document indicates who is interviewed, but it’s not an 
exhaustive list. Where confirmation is necessary Dev/Config 
Management, architects, and others make the list. 

3) Surveying claims to stop (and in practice stops) short of concrete
attestation across the board. 

4) BSIMM survey targets have included organizations, business units, and 
in rare cases, smaller scopes.

5) At the organization-level, survey confirmation includes facilities to 
differentiate “one group does it”,  “this is done by most”, and “the
organization governs mandatory activity conduct”.

6) An organization does/should not get credit for “one group does it”. 

7) More qualified BSIMM interviewers exist than Sammy et al. More are 
   minted as the study grows in size. There isn’t a written certification 
   and a pin, but there is an involved apprenticeship. And, Sammy runs 
   cross-checking of the grading process to make sure that interviewers 
   remain convergent in grading criteria. 

Addressing another question raised by the email chain below: just because the 
organization does the activity—as a rule—doesn’t mean that every team does it. 
Non-compliance may be a reason. Another (better) reason may be that the 
organization takes a “risk-based approach” to the activity. In other words, an 
organization may choose to do more mature architecture analysis activities on 
only a subset of applications—those that are higher risk.  This is what BSIMM 
activities Strategy and Metrics (SM) Level 3 are about.

Hopefully that helps a bit. 
-jOHN
 
John Steven 
iCTO, Cigital
+1 703-727-4034   |  @M1splacedsoul
https://google.com/+JohnStevenCigital

On Jan 7, 2014, at 8:07 PM, Christian Heinrich  
wrote:

> Stephen,
> 
> On Sat, Jan 4, 2014 at 8:12 PM, Stephen de Vries
>  wrote:
>> Leaving the definition of agile aside for the moment, doesn’t the fact that 
>> the BSIMM measures
>> organisation wide activities but not individual dev teams mean that we could 
>> be drawing inaccurate
>> conclusions from the data?  E.g.  if an organisation says it is doing Arch 
>> reviews, code reviews and
>> sec testing, it doesn’t necessarily mean that every team is doing all of 
>> those activities, so it may give
>> the BSIMM reader a false impression of the use of those activities in the 
>> real world.
>> 
>> In addition to knowing which activities are practiced organisation wide, it 
>> would also be valuable to
>> know which activities work well on a per-team or per-project basis.
> 
> My reading of the "Roles" section of BSIMM-V.pdf is that the people
> interviewed for the BSIMM sample are:
> 1. Executive Leadership (or CISO, VP of Risk, CSO, etc)
> 2. Everyone else within the Software Security Group (SSG)
> 
> What you are asking to be included is what is referred to as the
> "Satellite" within BSIMM-V.pdf and I believe this may also require the
> inclusion of http://cmmiinstitute.com/cmmi-solutions/cmmi-for-development/
> too (why not :) ).
> 
> The issue with this is that it would invalidate the statistics from
> the prior five BSIMM releases due to the inclusion of new questions
> and in addition these new statistics were not gathered over time
> either, hence the improvements measured over time within BSIMM would be
> invalid too due to the new dataset.
> 
> Furthermore, Gary, Sammy and Brian have limited time to interview all
> 67 BSIMM participating firms.
> 
> However, I would be interested to know what the "BSIMM Advisory Board"
> (i.e. http://bsimm.com/community/) view on this is, and whether it would be
> possible to undertake this additional sampling within their own BSIMM
> participating firm to determine if additional value would be gained
> for BSIMM?  However, I suspect that an objective measurement
> would be too hard to quantify due to internal politics of each BSIMM
> participating firm but I could be wrong.




RE: [SC-L] Interesting article on the adoption of Software Security

2004-06-09 Thread John Steven


Re: [SC-L] ZDNnet: Securing data from the threat within [bybuying products]

2005-01-19 Thread John Steven
All,

I agree with Crispin: RBAC will not completely alleviate the
problem. Whether or not you have a handle on role-based access controls (No:
most people don't--they're still trying to figure out how to roll out
directory services) those authorized users within your circles of trust will
still unwittingly leak information and be socially engineered. Their
privilege may even be leveraged when a [potentially external] threat attacks
an organization's software by subverting authentication, client/server
access schemes, or more directly by subverting or forging: tokens / keys /
passphrases / [whatever].

There are a lot of clients in my portfolio that are trying to adopt RBAC, code
signing/privilege, and other more advanced software controls. Are they going
to succeed wholesale throughout their organization: No, not for the next 3
years. Should they scrap this massive expenditure? I say no. Will it solve
the problem? (as we've said: No) It's beneficial though: Yes, here's why:

In the above attack scenarios, having a detailed RBAC scheme deployed means
that when access is stolen, authorization isn't all-encompassing. This
yields:

***RBAC Advantage #1: RBAC can reduce the impact (of access/auth) of a
successful attack. The attacker gains only the privileges of the victimized
user/role.

Organizations need not roll out RBAC to the entire Enterprise to see value
from it. Start small with both roles AND applications.

***Guide: Start with that infrastructure and those applications that are
built on a platform supporting user, role, and code privilege: J2EE or .NET.

***Guide: Start with an application whose roles are well defined because
they're central to your organization's business. It's essential that users
fit within a single role well here too, as you don't want to try to tackle
delegation, entitlement, and all that in your pilot. It's ok if your target
application interacts with partners, it need not be entirely internal, as
long as you can associate roles easily with your partner's users.

***Avoid: those applications that interact with other applications (or
partners) through a conduit that authenticates host-to-host only...
especially where it's known that a rich role structure should exist on top
of that conduit--but currently doesn't.

In order for RBAC to succeed, an organization needs to begin tackling role
definition and data sensitivity classifications. This should be an essential
piece of software development anyway.

***RBAC Advantage #2: RBAC forces an organization to conduct use-case
activities that help define sensitive classes of data and their mapping to
roles and privileges through workflows. Now the dev. organization has a
reason to think about data, roles, and workflows as part of use case creation
and requirements engineering. They even have an element of design to
characterize what might otherwise have been trapped in use cases and
policy--and ignored.

As an organization gets more experienced and capable at defining roles,
privilege, and sensitivity of data, they can start to make this more of an
Enterprise-wide pursuit.

***Avoid: trying to hold app teams responsible for things their
platform/toolkits don't support. It's still useful to have your C
programmers think about workflows, misuse, privilege, and so forth, but it's
ridiculous to try to have them hammer some RBAC-like nonsense into
production code. NOTE: I'm not advocating they ignore things like
authentication...

***Guide: As systems begin to interoperate, make sure that roles expand
based on real-life workflows. In cases where a role's privilege changes as
it moves between application contexts, handle that with programmatic and
declarative security mechanisms. Excellent, now you can use THOSE features
of the J2EE and .NET platforms too.
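
A minimal sketch of the programmatic half, assuming a J2EE container-managed
realm mapping users to roles (the declarative half is the familiar
security-constraint/auth-constraint element in web.xml; the servlet and role
names below are invented for illustration):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountExportServlet extends HttpServlet {

    // Programmatic check for the finer-grained case described above: the
    // coarse web.xml constraint admits all authenticated users to the app,
    // but in this context only account managers may export.
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        if (!req.isUserInRole("account-manager")) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN,
                    "export requires the account-manager role");
            return;
        }
        // ... stream the export to authorized users only ...
    }
}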

***Avoid: Allowing the overloading of role names, and roles gaining
disparate meaning in the vacuum of only a single app. When apps begin
to interoperate, the privilege isomorphism will break down and lead to very
inappropriate user privileges.

I'll stop here. This has become more a defense of RBAC as a pursuit than a
response to the original article. Still, I think RBAC _IS_ a valuable
thought tool that facilitates development teams "building security in" to an
application, even if they're only changing what they think about when
they're conducting software development activities and not effectively using
all the whiz-bang RBAC capabilities of toolkit XYZ.  As always:

***ULTIMATE 'TO AVOID': adopting SSO, RBAC, or other acronyms as 'features'
that will simply by inclusion (and your own ignorance) lead you to believe
that your software is more secure.

-
John Steven
Managing Director, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 585 8659 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F

Re: [SC-L] Information Security Considerations for Use Case Modeling

2005-06-27 Thread John Steven
d). Keep
communication between architects and requirements analysts high, but remain
formal about their hand off. Let analysts own requirements def., and keep
them focused on the user's/system's security goals. Hand off to designers to
get them to "sign up" to system construction, and only then deal with
constraints. To go further into the particulars here, I'd have to inject a
ton of change management text... So I punt here.

This, IMO, leads to much more intelligent testing than "Is SSL enabled?"
Check. By specifying requirements that speak to security goals and attack
resistance, you've given testers more wherewithal as to how to stress the
system, as an attacker would.

***Specific Tip: Leave no goal unexplored before beginning to architect. Do
not use architecture definition as a mechanism for exploring software
security goals.

***Specific Tip: Use your goals and high-level security requirements to
excise security mechanisms or expenditure that goes well above-and-beyond
your risks

**Use Risk Analysis and Threat Modeling to Curb Security Requirements
Explosion**
 Just as threat modeling and risk analysis can create security requirements,
they can be used to constrain their unbounded growth as well. Risk analysis
is particularly useful in determining whether or not you have too many (or
too onerous) security requirements initially. Threat modeling, which requires
at least initial design at its core, can help with requirements work during
change management activities.

Purely focusing on their requirements pruning potential, these two
activities allow a development team to prioritize what attacks will possess
the highest impact, and focus on requirements and design to address only
these issues.

So, this is just a sampling from a larger laundry list. But hopefully it
provides some more guidance to those whose appetites were whetted by Gunnar
and Johan's posts.

-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908


> From: Gunnar Peterson <[EMAIL PROTECTED]>
>
> When I coach teams on security in the SDLC, I ask them to first see
> what mileage they can get out of existing artifacts, like Use Cases,
> User Stories, and so on. While these artifacts and processes were not
> typically designed with security in mind, there is generally a lot of
> underutilized properties in them that can be exploited by security
> architects and analysts.
>
> The Use Case model adds traceability to security requirements, but
> just as importantly it allows the team to see not just the static
> requirements, rather you can the requirements in a behavioral flow.
> Since so much of security is protocol based and context sensitive,
> describing the behavioral aspects is important to comprehensibility.
>
> At the end of exploring existing artifacts, then there needs to be a
> set of security-centric artifacts like threat models, misuse cases,
> et. al. The output, e.g. design decisions, of these security-centric
> models are fed back into the requirements in an iterative fashion.
>
> Security analysts and architects cannot do all the work that goes
> into secure software development by themselves. There may be a
> handful of security people supporting hundreds of developers. This is
> why we need to educate not just developers on writing secure code,
> but also business analysts on security Use Cases, requirements, etc.
> (the main purpose of my article), testers on how to write/run/read
> security test cases, an so on.
>










Re: [SC-L] Spot the bug

2005-07-19 Thread John Steven
I'm excited that Microsoft is reaching out and providing this learning aid.
Most people I interview don't know how to spot some pretty simple vulnerable
code constructs. I'll even have my newbies subscribe to this RSS for a
spell, in hopes that their attack toolkit may be augmented.

But, some advice for Microsoft if they're listening:

When the initial entrées are so ridiculously simple that they don't even
bear a full minute of scrutiny, they are best served in sets of 10. That
gives the audience enough problems to puzzle through that they can mentally
engage. 

Long-term, I don't fear for the validity of the approach, because some
exploitable constructs are very subtle.

-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 404 9295 - Fax
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908


> From: Mark Curphey <[EMAIL PROTECTED]>
> 
> If you fancy yourself as a good code reviewer you can play spot the bug at
> MSDN. They will be getting harder !
> 
> http://msdn.microsoft.com/security/











Re: [SC-L] Bugs and flaws

2006-02-01 Thread John Steven
my experience
over a number of assessments to "upcast" typically endemic problems as flaws
(and solve them in the design or architecture) and "downcast" those problems
that have glaring quick-fixes. In circumstances where both those heuristics
apply, I suggest a tactical fix to the bug, while prescribing that further
analysis take the tack of further fleshing out the flaw.

Is this at all helpful?


-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908


> From: Crispin Cowan <[EMAIL PROTECTED]>
> 
> Gary McGraw wrote:
>> If the WMF vulnerability teaches us anything, it teaches us that we need
>> to pay more attention to flaws.
> The "flaw" in question seems to be "validate inputs", i.e. don't just
> trust network input (esp. from an untrusted source) to be well-formed.
> 
> Of special importance to the Windows family of platforms seems to be the
> propensity to do security controls based on the file type extension (the
> letters after the dot in the file name, such as .wmf) but to choose the
> application to interpret the data based on some magic file typing based
> on looking at the content.
> 
> My favorite ancient form of this flaw: .rtf files are much safer than
> doc files, because the RTF standard does not allow you to attach
> VBscript (where "VB" stands for "Virus Broadcast" :) while .doc files
> do. Unfortunately, this safety feature is nearly useless, because if you
> take an infected whatever.doc file, and just *rename* it to whatever.rtf
> and send it, then MS Word will cheerfully open the file for you when you
> double click on the attachment, ignore the mismatch between the file
> extension and the actual file type, and run the fscking VB embedded within.
> 
> I am less familiar with the WMF flaw, but it smells like the same thing.
> 
> Validate your inputs.
> 
> There are automatic tools (taint and equivalent) that will check whether
> you have validated your inputs. But they do *not* check the *quality* of
> your validation of the input. Doing a consistency check on the file name
> extension and the data interpreter type for the file is beyond (most?)
> such checkers.
> 
>>   We spend lots of time talking about
>> bugs in software security (witness the perpetual flogging of the buffer
>> overflow), but architectural problems are just as important and deserve
>> just as much airplay.
>>   
> IMHO the difference between "bugs" and "architecture" is just a
> continuous grey scale of degree.
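
To make Crispin's point concrete, here is a minimal sketch of the
extension-vs-content consistency check he describes. The magic numbers are
standard--RTF begins with "{\rtf"; a legacy .doc is an OLE2 compound file
beginning D0 CF 11 E0 A1 B1 1A E1--but the class and method names are mine:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;

public class FileTypeCheck {
    private static final byte[] OLE2_MAGIC = {
            (byte) 0xD0, (byte) 0xCF, 0x11, (byte) 0xE0,
            (byte) 0xA1, (byte) 0xB1, 0x1A, (byte) 0xE1 };
    private static final byte[] RTF_MAGIC = { '{', '\\', 'r', 't', 'f' };

    // Returns true only when a file's leading bytes agree with its claimed
    // extension; unknown extensions are rejected rather than guessed at.
    public static boolean extensionMatchesContent(String path) throws IOException {
        byte[] head = new byte[8];
        int n;
        try (FileInputStream in = new FileInputStream(path)) {
            n = in.read(head);  // a short read simply fails the checks below
        }
        String lower = path.toLowerCase();
        if (lower.endsWith(".rtf")) {
            return n >= 5 && Arrays.equals(Arrays.copyOfRange(head, 0, 5), RTF_MAGIC);
        }
        if (lower.endsWith(".doc")) {
            return n == 8 && Arrays.equals(head, OLE2_MAGIC);
        }
        return false;
    }
}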








Re: [SC-L] Bugs and flaws

2006-02-02 Thread John Steven
Kevin,

Jeff Payne and I were talking about this last night. Jeff's position was,
"...Or, you could just use the existing quality assurance terminology and
avoid the problem altogether." I agree with you and him; standardizing
terminology is a great start to obviating confusing discussions about what
type of problem the software faces.

Re-reading my post, I realize that it came off as heavy support for
additional terminology. Truth is, we've found that the easiest way to
communicate this concept to our Consultants and Clients here at Cigital has
been to build the two buckets (flaws and bugs).

What I was really trying to present was that Security people could stand to
be a bit more thorough about how they synthesize the results of their
analysis before they communicate the vulnerabilities they've found, and what
mitigating strategies they suggest.

I guess, in my mind, the most important things with regard to classifying
the mistakes software people make that lead to vulnerability (the piety of
vulnerability taxonomies aside) are to support:

1) Selection of the most effective mitigating strategy -and-
2) Root cause analysis that will result in changes in software development
that prevent software folk from making the same mistake again.

-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

> From: "Wall, Kevin" <[EMAIL PROTECTED]>
> 
> John Steven wrote:
> ...
>> 2) Flaws are different from bugs in important ways when it comes to presentation,
>> prioritization, and mitigation. Let's explore by physical analog first.
> 
> Crispin Cowan responded:
>> I disagree with the word usage. To me, "bug" and "flaw" are exactly
>> synonyms. The distinction being drawn here is between "implementation
>> flaws" vs. "design flaws". You are just creating confusing jargon to
>> claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
>> ::= bug. A vulnerability is a special subset of flaws/defects/bugs that
>> has the property of being exploitable.
> 
> I'm not sure if this will clarify things or further muddy the waters,
> but... partial definitions taken SWEBOK
> (http://www.swebok.org/ironman/pdf/Swebok_Ironman_June_23_%202004.pdf)
> which in turn were taken from the IEEE standard glossary
> (IEEE610.12-90) are:
> + Error: "A difference between a computed result and the correct result"
> + Fault: "An incorrect step, process, or data definition
>   in a computer program"
> + Failure: "The [incorrect] result of a fault"
> + Mistake: "A human action that produces an incorrect result"
> 
> Not all faults are manifested as errors. I can't find an online
> version of the glossary anywhere, and the one I have is about 15-20 years old
> and buried somewhere deep under a score of other rarely used books.
> 
> My point is though, until we start with some standard terminology this
> field of information security is never going to mature. I propose that
> we build on the foundational definitions of the IEEE-CS (unless their
> definitions have "bugs" ;-).
> 
> -kevin
> ---
> Kevin W. Wall  Qwest Information Technology, Inc.
> [EMAIL PROTECTED] Phone: 614.215.4788
> "The reason you have people breaking into your software all
> over the place is because your software sucks..."
>  -- Former whitehouse cybersecurity advisor, Richard Clarke,
> at eWeek Security Summit








Re: [SC-L] Bugs and flaws

2006-02-03 Thread John Steven
Ah,

The age-old Gary vs. jOHN debate. I do believe along the continuum of
architecture-->design-->impl. that I've shown the ability to discern flawed
design from source code in source code reviews.

Cigital guys reading this thread have an advantage in that they know both
the shared and exclusive activities defined as part of our architectural and
code review processes. The bottom line is this: as you look at source code,
given enough gift for architecture, you can identify _some_ of the design
(whether intended or implemented) from the implementation, and find _some_
flaws. Before you get wound up and say, "Maybe *you* can, jOHN" (tongue fully
in cheek), the Struts example I gave is one case. Looking at a single class
file (the privileged Servlet definition), you can determine that the Lead
Developer/Architect has not paid enough attention to authorization when
he/she designed how the application's functionality was organized.
Admittedly, _some_ (other) architectural flaws do demand attention paid only
through activities confined to architectural analysis--not code review.
 
Think back again to my original email. The situations I present (both with
the physical table and Struts) present a 'mistake' (IEEE parlance) that can
manifest itself in terms of both an architectural flaw and implementation
bug (Cigital parlance).

I believe that the concept that Jeff (Payne), Cowan, Wysopal, and even
Peterson (if you bend it correctly) present is that the 'mistake' may
cross-cut the SDLC--manifesting itself in each of the phases' artifacts. IE:
If the mistake was in requirements, it will manifest itself in design
deficiency (flaw), as well as in the implementation (bug).

Jeff (Williams) indicates that, since progress rolls downstream in the SDLC,
you _could_ fix the 'mistake' in any of the phases it manifests itself, but
that an efficiency argument demands you look in the code. I implore the
reader to recall my original email. I mention that when characterized as a bug,
the level of effort required to fix the 'mistake' is probably less than if
it's characterized as a flaw. However, in doing so, you may miss other
instances of the mistake throughout the code.

I whole-heartedly agree with Jeff (Williams) that:

1) Look to the docs. for the 'right' answer.
2) Look to the code for the 'truth'.
3) Look to the deployed bins. for 'God's Truth'.
 
The variance in these artifacts is a key element in Cigital's architectural
analysis.

Second, (a point I made in my original email) the objective is to give the
most practical advice possible to developers for fixing the problem. I'll
just copy-paste it from the original:
-
Summarizing, my characterization of a vulnerability as a bug or a flaw has
important implications towards how it's mitigated. In the case of the Struts
example, the bug-based fix is easiest--but in so characterizing the problem
I may (or may not) miss other instances of this vulnerability within the
application's code base.

How do I know how to characterize a vulnerability along the continuum of
bugs-->flaws?  I don't know for sure, but I've taken to using my experience
over a number of assessments to "upcast" typically endemic problems as flaws
(and solve them in the design or architecture) and "downcast" those problems
that have glaring quick-fixes. In circumstances where both those heuristics
apply, I suggest a tactical fix to the bug, while prescribing that further
analysis take the tack of further fleshing out the flaw.
-

Where my opinion differs from the other posters is this: I believe:
"Where a 'mistake' manifests itself in multiple phases of the software
development lifecycle, you're most apt to completely MITIGATE its effects by
characterizing it as early in the lifecycle as possible, as design or even
requirements. As Williams indicates, to the contrary, you may FIND the
problem most easily later in the lifecycle. Perhaps in the code itself."

Look, 
McGraw put forth the 'bug' and 'flaw' nomenclature. It's useful because
there is value in explicitly pinning the vulnerability in architecture,
design, or code if it helps the dev. org. get things sorted out securely and
throughout their application. My experience is that this value is real.

The message of the  'defect'/'mistake' purist resonates with me as well:
it's all simply a mistake some human made along the path of developing the
application. But! I can assure you, to the extent that root-cause analysis
is valuable, telling a dev. team where to most effectively contend with a
vulnerability is also valuable.

In other words, "smart guys will always find the problems--by hook, or by
crook--but it takes classification to aid in efficient and thorough
mitigation".
 
-
John Steven
P

[SC-L] The role static analysis tools play in uncovering elements of design

2006-02-03 Thread John Steven
Jeff,

An unpopular opinion I’ve held is that static analysis tools, while very helpful in finding problems, inhibit a reviewer’s ability to collect as much information about the structure, flow, and idiom of the code’s design as the reviewer might gather if he/she spelunks the code manually.

I find it difficult to use tools other than source code navigators (source insight) and scripts to facilitate my code understanding (at the design-level). 

Perhaps you can give some examples of static analysis library/tool use that overcomes my prejudice—or are you referring to the navigator tools as well?

-
John Steven   
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

  
snipped
Static analysis tools can help a lot here. Used properly, they can provide
design-level insight into a software baseline. The huge advantage is that
it's correct.

--Jeff 
snipped





Re: [SC-L] Ajax one panel

2006-05-22 Thread John Steven

Johan,

Yes, the attacks are feasible. Please refer to the Java language  
spec. on inner/outer class semantics and fool around with simple test  
cases (and javap -c) to show yourself what's happening during the  
compile step.


Attacks require getting code inside the victim VM, but mine pass
verification silently (even with the verifier turned on). Calling the
privileged class to lure it into doing your bidding requires only an
open package (not signed and sealed--again, see the spec.), and other
fun-and-excitement can be had if the developer hasn't been careful enough
to define the PrivilegedAction subclass as an explicit top-level
class and has passed information to-and-fro using the inner-class
syntactic sugar rather than explicit method calls defined at
compile time.
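
A minimal sketch of the defensive shape I'm advocating (the names are mine,
for illustration): make the PrivilegedAction an explicit top-level class that
receives its state through the constructor, so the compiler never emits the
package-accessible synthetic accessors that inner classes get:

import java.security.AccessController;
import java.security.PrivilegedAction;

public final class ReadConfigAction implements PrivilegedAction<String> {

    private final String key;  // state passed explicitly, never captured

    public ReadConfigAction(String key) {
        this.key = key;
    }

    // The privileged work touches only data supplied through the constructor.
    public String run() {
        return System.getProperty(key);
    }

    // Keep the privileged scope as small as possible.
    public static String readConfig(String key) {
        return AccessController.doPrivileged(new ReadConfigAction(key));
    }
}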


----
John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F
http://www.cigital.com
Software Confidence. Achieved.


On May 21, 2006, at 8:23 AM, Johan Peeters wrote:

That sounds like a very exciting idea, but I am not sure about the  
mechanics of getting that to work. I assume the permissions for the  
untrusted code would be in the closure's environment. Who would put  
them there? How would the untrusted code call privileged code?

Has anyone done this?

kr,

Yo

Gary McGraw wrote:

Hi yo!
Closure is very helpful when doing things like crossing trust
boundaries.  If you look at the craziness involved in properly
invoking the doPrivileged() stuff in Java 2, the need for closure is
strikingly obvious.
However, closure itself is not as important as type safety is.
So the fact that JavaScript may (or may not) have closure pales in
comparison to the fact that it is not type safe.

Ajax is a disaster from a security perspective.
gem
 -Original Message-
From:   Johan Peeters [mailto:[EMAIL PROTECTED]
Sent:   Sat May 20 15:44:46 2006
To: Gary McGraw
Cc: Mailing List, Secure Coding; SSG
Subject:Re: [SC-L] Ajax one panel
I think Java would have been a better language with closures, but  
I am intrigued that you raise them here. Do you think closures  
present security benefits? Or is this a veiled reference to Ajax?  
I guess JavaScript has closures.

kr,
Yo
Gary McGraw wrote:
Ok...it was java one.  But it seemed like ajax one on the show  
floor.   I participated in a panel yesterday with superstar bill  
joy.  I had a chance to talk to bill for a while after the gig  
and asked him why java did not have closure.  Bill said he was on  
a committee of five, and got out-voted 2 to 3 on that one (and  
some other stuff too).  You know the other pro vote had to be guy  
steele.  Most interesting.  Tyranny of the majority even in java.


Btw, bill also said they tried twice to build an OS on java and  
failed both times.  We both agree that a type safe OS will happen  
one day.


Here's a blog entry from john waters that describes the panel  
from his point of view.


http://www.adtmag.com/blogs/blog.aspx?a=18564

gem
www.cigital.com
www.swsec.com


Sent from my treo.


 

 



___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/ 
listinfo/sc-l
List charter available at - http://www.securecoding.org/list/ 
charter.php





--
Johan Peeters
program director
http://www.secappdev.org
+32 16 649000






Re: [SC-L] RE: Comparing Scanning Tools

2006-06-14 Thread John Steven

All,

Sorry it took so long, but I've finally got the new string of  
Building Security In (BSI) articles up on Cigital's website. Brian  
Chess (of Fortify Software) and Pravir Chandra (of Secure Software)  
and I collaborated on an article regarding adopting code analysis  
tools that might be of interest:


http://www.cigital.com/papers/download/j3bsi.pdf

Check it out. I'd say it's "up and coming" rather than "here", but some
of my more advanced clients have surprisingly good ideas on how to
assure outsourced development. As one might imagine, these involve:


* Running code analysis tools and penetration tools
* Defining and running programmatic destructive tests (what they call
UAT, though these go much deeper)
* Incorporating contract language (in addition to what's provided by
OWASP) about SLAs, QoS, and vulnerability remediation during maintenance
* and other controls

In rare cases, I've seen (and helped conduct) software architectural
analyses to determine whether the vendor's solution introduced security
flaws in pursuit of the contracted requirements.


Of course, hard problems still exist... not the least of which being
the pragmatics of allowing off-shore vendors to promote code into
production, hold staging or production secrets, access production data
stores, and so forth.


It's no shock that an organization must have a handle on how much
software development and maintenance really cost before it can absorb
these budgetary 'hits' explicitly. One way or another, though, the
costs get paid out on the back end.



John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F
http://www.cigital.com
Software Confidence. Achieved.


On Jun 9, 2006, at 2:32 PM, Jeremy Epstein wrote:



At the RSA Conference in February, I went to a reception hosted by  
a group
called "Secure Software Forum" (not to be confused with the company  
Secure
Software Inc, which offers a product competitive to Fortify).  They  
had a
panel session where representatives from a couple of companies not  
in the

software/technology business claimed that they're making contractual
requirements in this area (i.e., that vendors are required to  
assert as part
of the contract what measures they use to assure their code).  So I  
guess
there's proof by construction that companies other than Microsoft &  
Oracle

care.

Having said that, it's completely at odds with what I see working for

an ISV of a non-security product.  That is, I almost never have
prospects/customers ask me what we do to assure our software. If it  
happened
more often, I'd be able to get more budget to do the analysis that  
I think

all vendors should do :-(

--Jeremy

P.S. Since Brian provided a link to a press release about Oracle using
Fortify, I'll offer a link about a financial services company using  
Secure

Software: http://www.securesoftware.com/news/releases/20050725.html


  _

From: [EMAIL PROTECTED] [mailto:sc-l-[EMAIL PROTECTED]

On Behalf Of McGovern, James F (HTSC, IT)
Sent: Friday, June 09, 2006 12:10 PM
To: Secure Mailing List
Subject: RE: [SC-L] RE: Comparing Scanning Tools


I think I should have been more specific in my first post. I should  
have
phrased it as I have yet to find a large enterprise whose primary  
business
isn't software or technology that has made a significant investment  
in such

tools.

Likewise, a lot of large enterprises are shifting away from building
in-house to outsourcing and/or buying, which means that secure coding
practices should also be enforced via procurement agreements. Has
anyone here run

across contract clauses that assist in this regard?

-Original Message-
From: Gunnar Peterson [mailto:[EMAIL PROTECTED]
Sent: Friday, June 09, 2006 8:48 AM
To: Brian Chess; Secure Mailing List; McGovern, James F (HTSC, IT)
Subject: Re: [SC-L] RE: Comparing Scanning Tools


Right, because their customers (are starting to) demand more secure  
code
from their technology. In the enterprise space the financial,  
insurance,
healthcare companies who routinely lose their customers' data and
provide
their customers with vulnerability-laden apps have not yet seen the  
same
amount of customer demand for this, but 84 million public lost  
records later

( http://www.privacyrights.org/ar/ChronDataBreaches.htm)

Re: [SC-L] Code Analysis Tool Bakeoff

2007-01-08 Thread John Steven
I think Gunnar hit a lot of the important points. Bake offs do  
provide interesting data. I have a few slide decks which I've created  
to help companies with this problem, and would be happy to provide  
them to anyone willing to email me side-channel. Of the items Gunnar  
listed, I find that baking off tools helps organizations understand  
where they're going to have to apply horsepower and money.

For instance, companies that purchase Coverity's Prevent seem to have
little trouble getting penetration into their dev. teams, even beyond
the initial pilot. Model tuning makes it breeze-easy to keep 'mostly
effective' rules in play while still reducing false positives. However,
with that ease of adoption and developer-driven results interpretation,
orgs. buy some inflexibility in terms of later extensibility: Java
support, still only in beta, is sorely lacking, and the mechanisms by
which one writes custom checkers pose a stiff learning curve. Whereas,
when one adopts Fortify's Source Code Analyzer, developer penetration
will be _the_ problem unless the piloting team bakes a lot of rule
tuning into the product's configuration, and results pruning into the
usable model, prior to rollout. However, later customization seems the
easiest of any of the tools I'm familiar with, and language and rules
coverage seems, at the macro level, consistently the most robust.

In contrast, it takes real experience to illuminate each tool's  
difference in the accuracy department. Only a bakeoff that contains  
_your_ organization's code can help cut through the fog of what each  
vendor's account manager will promise. The reason seems to be that  
the way a lot of these tools behave relative to each other  
(especially Prexis, K7, and Source Analyzer) depends greatly on  
minute details of how they implemented rules. However, at the end of  
the day, their technologies remain shockingly similar (at least as  
compared to products from Coverity, Secure Software, or Microsoft's  
internal Prefix).

For instance, in one bake off, we found that (with particular open
source C code) Fortify's tool found more unique instances of overflows
on stack-based, locally declared buffers with offending locally
declared length-specifiers. However, Klocwork's tool was profoundly
more accurate in cases in which the overflow had similar properties
but represented an 'off-by-one' error within a buffer declared as a
fixed-length array.

Discussing tradeoffs in tool implementation at this level leads
bakers down a bevy of rabbit holes. Looking at tools to the extent
Cigital does--for deep understanding of our clients' code and how
_exactly_ the tool is helping or hurting us--isn't _your_ goal. But by
collecting data on seven figures' worth of your own code base, you can
start to see which trends in your programmers' coding practices play
to which tools. This can, in fact, help you make a better tool choice.


John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F
http://www.cigital.com
Software Confidence. Achieved.


On Jan 6, 2007, at 11:27 AM, Gunnar Peterson wrote:

>> 1. I haven't gotten a sense that a bakeoff matters. For example,  
>> if I wanted
>> to write a simple JSP application, it really doesn't matter if I  
>> use Tomcat,
>> Jetty, Resin or BEA from a functionality perspective while they  
>> may each have
>> stuff that others don't, at the end of the day they are all good  
>> enough. So is
>> there really that much difference in comparing say Fortify to  
>> OunceLabs or
>> whatever other tools in this space exist vs simply choosing which  
>> ever one
>> wants to cut me the best deal (e.g. site license for $99 a year :-) ?
>>
>
> I recommend that companies do a bakeoff to determine
>
> 1. ease of integration with dev process - everyone's dev/build  
> process is
> slightly different
>
> 2. signal to noise ratio - is the tool finding high priority/high  
> impact
> bugs?
>
> 3.  remediation guidance - finding is great, fixing is better, how
> actionable and relevant is the remediation guidance?
>
> 4. extensibility - say you have a particular interface, like mq  
> series for
> example, which has homegrown authN and authZ foo that you want to  
> use the
> static analysis to determine if it is used correctly. How easy is it
> to build/check/enforce these rules?
>
> 5. roles - how easy is it to separate out roles/reports/functionality,
> like developer, ant jockey, and auditor?
>
> 6. software architecture span - your high risk/high priority apps are
> probably multi-tier w/ lots of integration points, how much  
> 

Re: [SC-L] How is secure coding sold within enterprises?

2007-03-19 Thread John Steven
s long enough for now. If there are topics you'd like me  
to enumerate more fully, or if I've missed something, shoot me an email.


Hope this helps, and sorry I didn't just attach a PPT ;)


John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F

Blog: http://www.cigital.com/justiceleague
http://www.cigital.com
Software Confidence. Achieved.


On Mar 19, 2007, at 4:12 PM, McGovern, James F ((HTSC, IT)) wrote:

I agree with your assessment of how things are sold at a high level,
but I'm still struggling: it takes more than just turning your points
into graphics to sell this. Hence I am still attempting to get my
hands on some PPT decks used internally at enterprises prior to
consulting engagements, and I think a better answer will emerge. Such
decks may convey a sense of budget, timelines, roles and
responsibilities, who needed to buy in, industry metrics, quotes from
noted industry analysts, etc., that will help shortcut my own work so
I can start moving toward the more important stuff.

-Original Message-
From: Andrew van der Stock [mailto:[EMAIL PROTECTED]
Sent: Monday, March 19, 2007 2:50 PM
To: McGovern, James F (HTSC, IT)
Cc: SC-L
Subject: Re: [SC-L] How is secure coding sold within enterprises?

There are two major methods:

Opportunity cost / competitive advantage (the Microsoft model)
Recovery cost reductions (the model used by most financial  
institutions)


Generally, opportunity cost is where an organization can further its
goals by a secure business foundation. This requires the CIO/CSO to be
able to sell the business on this model, which is hard when it is
clear that many businesses have been founded on insecure foundations
and do quite well nonetheless. Companies that choose to be secure have
a competitive advantage, an advantage that will increase over time and
will win conquest customers. For example (and this is my humble
opinion), Oracle’s security is a long-standing unbreakable joke, and
in the meantime MS ploughed billions into fixing their tattered
reputation by making security a competitive advantage, thus making
their market dominance nearly complete. Oracle is now paying for their
CSO’s mistake in not understanding this model earlier. Forward-looking
financial institutions are now using this model; my old bank (with its
SMS transaction authentication feature) won many new customers by not
only promoting itself as secure, but doing the right thing and
investing in essentially eliminating Internet banking fraud. It saves
them money, and it works well for customers. This is the best
model, but the hardest to sell.


The second model is used by most financial institutions. They are  
mature risk managers and understand that a certain level of risk  
must be taken in return for doing business. By choosing to invest  
some of the potential or known losses in reducing the potential for  
massive losses, they can reduce the overall risk present in the  
corporate risk register, which plays well to shareholders. For  
example, if you invest $1m in securing a cheque clearance process
worth (say) $10b annually to the business, and that investment reduces
cheque fraud by $5m per year and eliminates $2m of unnecessary
overhead every year, security is an easy sell with obvious targets to
improve profitability. A well-managed operational risk group will
easily identify the riskiest aspects of a mature company’s
activities, and it’s easy to justify improvements in those areas.


The FUD model (used by many vendors - “do this or the SOX boogeyman  
will get you”) does not work.


The do nothing model (used by nearly everyone who doesn’t fall into  
the first two categories) works for a time, but can spectacularly  
end a business. Card Systems anyone? Unknown risk is too risky a  
proposition, and is plain director negligence in my view.


Thanks,
Andrew


On 3/19/07 11:35 AM, "McGovern, James F (HTSC, IT)"  
<[EMAIL PROTECTED]> wrote:


I am attempting to figure out how other Fortune enterprises have gone
about selling the need for secure coding practices, and can't seem to
find the answer I seek. Essentially, I have discovered that one of a
few scenarios exists: (a) the leadership chain was highly technical
and intuitively understood the need; (b) the primary business model of
the enterprise is banking, investments, etc., where the risk is
perceived as higher if this is not done; or (c) it was strongly
encouraged by a member of a very large consulting firm (e.g. McKinsey,
Accenture, etc.).


I would like to understand what PowerPoint decks employees of Fortune
enterprises use to sell the concept PRIOR to bringing in consultants
and vendors to help them fulfill the need. Has anyone run across any
PPT that best outlines this for demograph

Re: [SC-L] How is secure coding sold within enterprises?

2007-03-20 Thread John Steven

James,

I can't believe I forgot to mention the presentation before mine at  
that particular OWASP con. Anthony Canike did an exceptional job  
chronicling what he had done at Vanguard. This presentation, if I  
recall correctly, should have some fodder for you.


www.owasp.org/images/0/05/AppSec2005DC-Anthony_Canike-Enterprise_AppSec_Program.ppt



John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F

Blog: http://www.cigital.com/justiceleague
http://www.cigital.com
Software Confidence. Achieved.


On Mar 19, 2007, at 9:55 PM, John Steven wrote:


Andrew, James,

Agreed, Microsoft has put some interesting thoughts out in their  
SDL book. Companies that produce a software product will find a lot  
of this approach resonates well. IT shops supporting financial  
houses will have more difficulty. McGraw wrote a decent blog entry  
on this topic:


http://www.cigital.com/justiceleague/2007/03/08/cigitals-touchpoints-versus-microsofts-sdl/


Shockingly, however, I seem to be his only commentator on the topic.

I think James will find Microsoft's literature falls terribly short  
of even the raw material required to produce the PPT he desires.  
Let's see what we can do for him.


First: audience. I'm not sure of James' position, but it doesn't  
sound like he's high enough that he's got the CISO's ear now, nor  
that he's face-down in the weeds either. James, you sit somewhere  
in-between? James appears to work for an insurance company.  
Insurance companies do care about risk, but they're sometimes blind  
to the kinds (and magnitudes) of software risk their business  
faces. They fall in a middle ground between securities companies  
and banks.


Second, length: if you're going after an SVP or EVP, James, I'd keep
the deck to ~3-5 slides. 1) Motivate the problem, 2) show your org's
status (as an application security framework), and 3) show the 6 mo.,
9 mo., and (maybe) 12 mo. roadmap. Depending on the SVP, another two
slides comparing you to others might work, as well as a slide that
talks in more detail about costs, deliverables, resource requirements,
and value.


Higher? I'd do two slides: 1) framework and 2) roadmap. The end.
Place costs and value on the roadmap.

What about content? Longer decks I've seen (or helped create) have
begun with research from analyst firms, or with pertinent headlines,
to motivate the problem (couched as FUD if you're not careful) on
slide one. Still, you'd be wise to pick fodder that will appeal to the
decision maker's own objectives. His/her objectives may be in pursuit
of differentiation/opportunity or risk reduction, as Andrew said, or
(more probably) they're pursuant to a more mundane goal: drive down
(or hold constant) security cost while driving up the effectiveness of
the spending.


To this end, the decks I've seen quickly moved beyond motivation  
into solution. Here, you have to begin thinking about your current  
org. See:


http://www.cigital.com/justiceleague/2007/02/22/keeping-up-with-the-jones-security-initiatives/


To summarize my entry: your organization probably didn't start
thinking about software security yesterday, and it likely has
something in place--even if it isn't to your satisfaction yet.
Likewise, true strengths lurk, waiting to be leveraged. Out here in
mailing-list-land, we can't be sure of specifics, but I've got some
premonitions. Insurance companies I've seen seem to mix small
wild-wild-west teams (developer cowboys 'following' Agile as an excuse
to just slam code without process) with teams following a largely
monolithic, waterfall-like (regardless of how 'iterative' it's
described) development process in their application portfolio. In
either case, an in-project risk officer exists, but the function seems
overshadowed by deadlines, features, and cost.


On the topic of the framework slide, you mentioned a _very_  
important quality: who, what, when structure. I wrote an IEEE S&P  
article on this topic long ago:


www.cigital.com/papers/download/j2bsi.pdf

but you can also look at my talk from OWASP's DC conference in '05  
on the same topic for slide help.


What about the roadmap--the way forward? Even if currently
ineffective, existing security items like an architectural review
checklist present an opportunity with which to start your roadmap.
When working on your roadmap, focus on how small iterative changes in
existing elements (like that checklist) can save you security effort
(spending) later. Pick sure wins, and to communicate value, show a
metric that will demonstrate the savings. Propose measurements up
front, if only verbally, as part of this presentation. For ins

[SC-L] Technology-specific Security Standards

2007-05-23 Thread John Steven
All,

My last two posts to Cigital's blog covered whether to make your
security standards technology-stack-specific and code-centric, or to
be more general about them:

http://www.cigital.com/justiceleague/2007/05/18/security-guidance-and-its-%e2%80%9cspecificity-knob%e2%80%9d/

And

http://www.cigital.com/justiceleague/2007/05/21/how-to-write-good-security-guidance/

Dave posted a comment on the topic, which I'm quoting here:
-
Your point about the "perishability" of such prescriptive checklists does
make the adoption of such a program fairly high maintenance. Nothing wrong
with that, but expectations should be set early that this would not be a
fire-and-forget type of program, but rather an ongoing investment.
-

I agree, specifying guidance at this level does take a lot more effort; you
get what you pay for eh? I responded in turn with a comment of my own. I've
seen some organizations control this cost effectively and still get value:

See:
http://www.cigital.com/justiceleague/2007/05/18/security-guidance-and-its-%e2%80%9cspecificity-knob%e2%80%9d/#comment-1048

Some people think my stand controversial...

What do you guys think?

----
John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Really dumb questions?

2007-08-30 Thread John Steven
James,

Not dumb questions: an unfortunate situation. I do tool bakeoffs for clients a 
fair amount. I'm responsible for the rules Cigital initially sold to Fortify. I 
also attempt to work closely with companies like Coverity and understand deeply 
the underpinnings of that tool's engine. I've a fair amount of experience with 
Klocwork, unfortunately less with Ounce.

I understand the situation like this: technical guys at each of these companies 
are all great guys, smart, and understand their tool's capabilities and focus. 
They accurately describe what their tool does and don't misrepresent it.

On the other hand, I've experienced competition bashing in the sales
process as I've helped companies with tool selections and bake offs. I
see NO value in this. As I said in a previous post to this list: the
tools differ both macroscopically in terms of approach and
microscopically in terms of rule implementation. Please see my previous
post about bake-offs and such if you'd like more information on how to
disambiguate tool capabilities objectively.

No blanket statement about quality or security fits any vendor's tool;
ANY vendor. Ignore this level of commentary from the vendors.*(1)

No boolean answer exists to your question; let me give you some of my
experiences:


 *   Fortify's SCA possesses far-and-away the largest rule set, covering
both topics people consider purely security and those that may or may
not create opportunity for exploit (often when combined with other
factors), which one may call quality rules. My impression is that SCA
can be effectively used by security folk, QA folk, or developers with a
mind to improve the quality or security of their code. The recent
inclusion of FindBugs bolsters SCA's capability to give code-quality
commentary.


 *   Coverity's Prevent often gets pigeon-holed as "a quality tool", but
does an exceptional job of tracking down memory issues in C and C++.
Skilled security consultants will tell you that failing to fix what
Prevent finds in your code will result in various memory-based command
injection opportunities (BOs, format strings, write-anywheres, etc.). It
also effectively targets time-and-state issues, as well as other classes
of bug. Prevent can effectively be used by security folk and developers
(or your rare hardcore QA person) to improve code quality and squelch
opportunity for exploit.


 *   Klocwork's tool targets rule spaces similar to Fortify's, but
possesses fewer rules. Though often pegged as a quality tool (as well),
I do find its UI (more than its engine) possesses helpful features that
only a QA professional would enjoy. These include its defect-density
calculation, "reverse engineering" capabilities, and its
reporting/time-series style. Klocwork can be effectively used by a
security guy to find security bugs, but I believe Fortify and Ounce have
widened the rules gap in recent years.

Tackling your other questions in rapid succession:

There is no difference, technically, between the ability to scan for
quality or for security. However, each engine focuses on parsing and
keeping track of only that state which provides meaningful data to its
rules. You can imagine that Fortify carries a fair amount of information
about where data came from and what functions may be dangerous, and can
therefore support new security rules easily. It doesn't carry around
information to aggregate defect density readily like K7 can. Does this
make one intrinsically better than the other for quality or security?
Perhaps having worked on static analysis tools I'm cranky, but I say,
"No." If the market clearly mandated something specific, all the vendors
would augment their engines to support it. Some would be in a better
position to offer it than others.
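
As an illustration of the kind of state a dataflow-oriented engine
carries, consider this minimal (hypothetical) servlet sketch: the engine
tags the request parameter as a 'source', follows it through the string
concatenation, and reports when it reaches the query 'sink' with no
sanitizer on the path:

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.servlet.http.HttpServletRequest;

    public class AccountLookup {
        private Connection conn;  // hypothetical; acquisition elided

        public void lookup(HttpServletRequest req) throws Exception {
            String id = req.getParameter("acct");      // taint source
            String query = "SELECT * FROM accounts "   // taint propagates
                         + "WHERE id = '" + id + "'";  // via concatenation
            Statement stmt = conn.createStatement();
            stmt.executeQuery(query);                  // tainted sink: SQLi
        }
    }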

When I talk to vendors about COBOL and similar support they shudder. I think 
this space represents a huge opportunity for the first vendor to support it, 
but as a commercial organization, I wouldn't hold your breath on near-term 
support.

I could answer how these tools support new languages, but that doesn't seem 
like public domain knowledge. I'll let the vendors tackle that 'un.


John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


*(1) I'm also explicitly dodging the quality vs. security debate here. Having 
read/posted to this list for the last 7 years, that semi-annual topic has been 
flogged more than your average dead equine.



From: "McGovern, James F (HTSC, IT)" <[EMAIL PROTECTED]>

Most recently, we have met with a variety of vendors includin

Re: [SC-L] quick question - SXSW

2008-03-14 Thread John Steven
All,

I just got back from SD West, where I spoke twice in the security
track. In my third year working this show, I was shocked to find larger
audiences, avid participation, and (what excited me the most) very
clueful development types.

Awareness will continue to be a big part of "getting the word out there". But 
what Gunnar attempted to do with his track at QCon was excellent and we should 
learn from it. He 1) organized a set of talks that followed each other clearly, 
building on previous content and 2) focused on more intermediate or advanced 
content.

Too often, the security talks at conferences overlap. Even this year's SD West 
had two threat modeling talks and a secure design talk. I'm also sick of their 
patronizing structure and titles: "Top 10 Web Vulnerabilities". Smart 
developers interested in learning this stuff can avail themselves of strong web 
tutorials from a variety of sources at this point. Overlapping talks comprised 
mostly of top ten lists leave developers with the empty "So what do I do about 
it?" feeling.

At SD West, I positioned my two talks as "advanced". I laughed looking at the 
conference board. I personally accounted for about half of the advanced talks 
for the conference.  My "Static Analysis Tool Customization" talk generated 
great discussion. I was pleased. Almost every audience member worked for an 
organization that was piloting or had already adopted a tool. They had really 
used it, and crashed against a rock. Because experience varied
(Coverity, Klocwork, Fortify, and Ounce were all represented), we got to
talk about more than just one tool. Comparison was very demonstrative.
People took copious notes, stayed after; discussion continued.

Yes, we still need more awareness but people want more advanced talks. They're 
ready.

At SD Best, I'm working to modernize the curriculum. I'm working with the 
development track leads to make sure that things cohere. Rather than mixing 
old-school buffer overflow information, with web security, with some process 
help, with some tool demos, I'm going to try to organize instruction around 
some of the newer stuff that developers are beginning to play with and be 
excited about. We'll focus on web services and web 2.0. In my mind, teaching 
people to "think destructively" is important, but brining it back around and 
showing what to do about vulnerabilities is hugely important at a dev. 
conference. Last year I pushed speakers in this track to give constructive 
advice. I'll do the same this year.

Whether we're speaking to security guys or developers, it's time to show people 
patterns and approaches that will help them solve the problems we've been 
talking about for years.

Sum: Modernize advice. Talk to people in the languages and frameworks that 
they're using now. Get practical and constructive. Teach people how to build it 
right. Move beyond awareness to intermediate and advanced topics. It's time to 
raise the bar.


John Steven
Technical Director; Principal, Software Security Group
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of Gunnar Peterson [EMAIL PROTECTED]

I agree this is a big issue, there is no cotton picking way that the
security people are solving these problems, it has to come from the
developers. I put together a track for QCon which included Brian Chess
on Static Analysis, John Steven on Threat Modeling, and Jeff Williams on
ESAPI and Web 2.0 security. The presentations were great, the audience
was engaged and enthusiastic but small; it turns out that it is hard to
compete with the likes of Martin Fowler, Joshua Bloch, and Richard
Gabriel. Even when what they are talking about is some nth level
refinement and what we are talking about is all the gaping holes in the
previous a-m refinements and how to close some of them.

http://jaoo.dk/sanfrancisco/tracks/show_track.jsp?trackOID=73



Re: [SC-L] Language agnostic secure coding guidelines/standards?

2008-11-13 Thread John Steven
All,

James McGovern hits the core issue with his post, though I'm not sure how many 
organizations are self-aware enough to realize it. In practice, his 
philosophical quandary plays out through a few key questions. Do I:

1) Write technology-specific best-practices or security policy?
2) Couch my standards as "do not" or "do"?
3) Cull best practices from what people do, or set a bar and drive people 
towards compliance?
4) Spend money on training, or a tool roll-out?

See:
http://risiko.cigital.com/justiceleague/2007/05/25/a-mini-architecture-for-security-guidance/
http://risiko.cigital.com/justiceleague/2007/05/21/how-to-write-good-security-guidance/
http://risiko.cigital.com/justiceleague/2007/05/18/security-guidance-and-its-%e2%80%9cspecificity-knob%e2%80%9d/

Though old, these posts still seem to help.

More recently, this argument has most frequently taken the form of
"language-specific guidance or agnostic security guidance?", and it has
begun to play out in Andrew's post quoted below. There's tremendous
value in agnostic guidance (especially because it applies well to
languages for which specific guidance or tool support doesn't yet exist,
and because it withstands time's test slightly better). But what OWASP
has documented is a false victory for the proponents of agnostic
guidance--citing its language independence. It, like any decent
guidance, IS technology-specific, just not to any particular language.
It's closely coupled to both the current web-technology stack and a
penetration-testing approach (though, frankly, that is fine). Move
outside of either and you're going to find the guidance wanting. Saying
the OWASP guidance is better than language-specific guidance is like
getting caught in the rabbit hole of Java's "single language compiled to
a virtual machine that runs anywhere" vs. .NET's "many languages
compiled to a single format that runs one place."

High-minded thought about whether one should proceed top-down, from a
strong (but impractical-to-apply) governance initiative, or bottom-up,
from a base of core scanning capabilities afforded by a security tool,
has won me little progress. It's frustrating, and I give up. We needed a
breakthrough, and we've gotten it:

As a result, we've built a tool chain that allows us/our clients to
rapidly implement automated checks whether they have a static analysis
tool, rely on penetration testing, or desire to implement their security
testing as part of a broader QA effort. The 'rub' is that we've stayed
technology-specific (to the Java EE platform)--so all the appropriate
limitations apply... but recently we were able to deploy the static
analysis piece of this puzzle (which we call our Assessment Factory) and
automate 55% of a corporation's (rather extensive) security standards
for that stack in 12 man-hours. That's ridiculous (in a good way).

So, in my mind, the key is to get specific and do it quickly. Deciding
whether to get language- or technology-stack-specific is a red-herring
argument. The question should be: are you going to implement your
automation with dynamic testing tools, with static analysis tools, or
with, say, a requirements management tool such as Archer?

If you're going the dynamic route, focus on technology-specific guidance. 
Download the OWASP security testing guide. Conduct a gap analysis on the guide: 
what can you automate with your existing test harness? If you don't have a 
harness, download Selenium. Once the gap analysis is done: get to work 
automating iteratively.
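
For flavor, here's a minimal sketch of the kind of check one might
automate with Selenium's Java bindings (the URL, form field, and marker
payload are all hypothetical): drive the app with a marker and flag any
page that reflects it unescaped.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class ReflectionSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                String marker = "<xss-canary-42>";           // hypothetical probe
                driver.get("http://qa.example.com/search");  // hypothetical URL
                driver.findElement(By.name("q")).sendKeys(marker);
                driver.findElement(By.name("q")).submit();
                // If the raw marker survives into the response, output
                // encoding is missing somewhere along this path.
                if (driver.getPageSource().contains(marker)) {
                    System.err.println("FAIL: unescaped reflection in results");
                }
            } finally {
                driver.quit();
            }
        }
    }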

If you're going the static route: focus on language-specific guidance. Begin 
customizing your tool to find vulnerable constructs in your architectural 
idiom, and to detect non-compliance to your corporate standards/policy.
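
Each commercial tool has its own rule language, so as a tool-agnostic
sketch, here's a toy compliance check (the banned API and the corporate
standard behind it are hypothetical; real engines work on ASTs and
dataflow, not regexes over text):

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.regex.Pattern;

    public class BannedApiCheck {
        // Hypothetical corporate standard: raw Runtime.exec() is
        // forbidden; teams must call a vetted wrapper instead.
        private static final Pattern BANNED =
            Pattern.compile("Runtime\\s*\\.\\s*getRuntime\\s*\\(\\)\\s*\\.\\s*exec");

        public static void main(String[] args) throws IOException {
            Path root = Paths.get(args.length > 0 ? args[0] : "src");
            Files.walk(root)
                 .filter(p -> p.toString().endsWith(".java"))
                 .forEach(BannedApiCheck::scan);
        }

        private static void scan(Path file) {
            try {
                List<String> lines = Files.readAllLines(file);
                for (int i = 0; i < lines.size(); i++) {
                    if (BANNED.matcher(lines.get(i)).find()) {
                        System.out.printf("%s:%d: banned Runtime.exec()%n",
                                          file, i + 1);
                    }
                }
            } catch (IOException e) {
                System.err.println("skipping " + file + ": " + e.getMessage());
            }
        }
    }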

It's really not as bad as it can seem. You just have to remember you won't 
achieve 100% coverage in the first month. Though, any seasoned QA professional 
will tell you--expecting to is ludicrous.


John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of Andrew van der Stock

The OWASP materials are fairly language neutral. The closest document
to your current requirements is the Developer Guide.

I am also developing a coding standard for Owasp with a likely
deliverable date next year. I am looking for volunteers to help with
it, so if you want a document that exactly meets your needs ... Please
join us!

On Nov 12, 2008, at 19:21, "Pete Werner" <[EMAI

Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist (informIT)

2009-03-19 Thread John Steven
Steve,

You saw my talk at the OWASP assurance day. There was a brief diversion
about the number of "business logic" problems and "design flaws"
(coarsely lumped together in my chart). That 'weight' should indicate
that--at least in the subset of clients I deal with--flaws aren't
getting short shrift.

http://www.owasp.org/images/9/9e/Maturing_Assessment_through_SA.ppt (for those 
who didn't see it)

You may also want to look at my OWASP NoVA chapter presentation on
"why" we believe Top N lists are bad... It's not so much a rant as it is
a set of limitations in ONLY taking a Top N approach, plus a set of
constructive steps forward to improve one's practices:

http://www.owasp.org/images/d/df/Moving_Beyond_Top_N_Lists.ppt.zip

I cover how one should cause their own organization-specific Top N list to 
emerge and how to manage it once it does.


John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.




On 3/18/09 6:14 PM, "Steven M. Christey"  wrote:



On Wed, 18 Mar 2009, Gary McGraw wrote:

> Many of the top N lists we encountered were developed through the
> consistent use of static analysis tools.

Interesting.  Does this mean that their top N lists are less likely to
include design flaws?  (though they would be covered under various other
BSIMM activities).

> After looking at millions of lines of code (sometimes constantly), a
> ***real*** top N list of bugs emerges for an organization.  Eradicating
> number one is an obvious priority.  Training can help.  New number
> one...lather, rinse, repeat.

I believe this is reflected in public CVE data.  Take a look at the bugs
that are being reported for, say, Microsoft or major Linux vendors or most
any product with a long history, and their current number 1's are not the
same as the number 1's of the past.



Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist (informIT)

2009-03-20 Thread John Steven
 their total findings, out 
of the box
 *   An initial false positive rate as high as 75-99% from a static analysis 
tool, without tuning
 *   Less than 9% code coverage (by even shallow coverage metrics) from 
pen-testing tools

Qualitatively, I can tell you that I expect an overwhelming majority of static 
analysis results produced in an organization to come from customization of 
their adopted product.

Simply: if you base your world view on only those things a tool (any
tool) produces, your world view is as narrow as a Neo-con's--and will
prove about as ineffective. The same is true of those who narrow their
scope to the OWASP Top 10 or the SANS Top 25.

[Top N Redux]
Some have left the impression that starting with a Top N list is of no
use. Please don't think I'm in this camp. In my last two presentations
I've indicated, "If you're starting from scratch, these lists (or lists
intrinsically baked into a tool's capabilities for detection) are a
great place to start." And if you can't afford frequent industry
interaction, use Top N lists as a proxy for it. They're valuable, but
like anything, only to a point.

For me, this discussion will remain circular until we think about it in
terms of measured, iterative organizational improvement. Why? Because
when an organization focuses on getting beyond a "Top N" list, it will
just create its own organization-specific "Top N" list :-) If they're
smart, though, they'll call it a dashboard and vie for a promotion ;-)

From the other side? People building Top N lists know they're not a
panacea, but they also know that a lot of organizations simply can't
stomach the kind of emotional investment that BSIMM (and its ilk) come
with.

This leaves me with the following:

[Conclusions]
Top N lists are neither necessary nor sufficient for organizational success.
Top N lists are necessary but not sufficient for industry success.
Maturity models are neither necessary nor sufficient for organizational success.
Maturity models are necessary but not sufficient for industry success.

Always avail yourself of what the industry produces;
Never confine yourself to a single industry artifact dogmatically;
Whatever you consume from industry, improve it by making it your own;
Wherever you are in your journey, continue to improve iteratively.

[Related Perennial Rabbit Holes] (bonus)
Bugs vs. Flaws: John Steven'06 - 
http://www.mail-archive.com/sc-l@securecoding.org/msg00888.html
Security Vs. Quality: Cowan '02 - http://www.securityfocus.com/archive/98/304766


John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


On 3/19/09 7:28 PM, "Benjamin Tomhave"  wrote:

Why are we differentiating between "software" and "security" bugs? It
seems to me that all bugs are software bugs, and how quickly they're
tackled is a matter of prioritizing the work based on severity, impact,
and ease of resolution. It seems to me that, while it is problematic
that security testing has been excluded historically, our goal should
not be to establish yet another security-as-bolt-on state, but rather
leapfrog to the desired end-state where QA testing includes security
testing as well as functional testing. In fact, one could even argue
that security testing IS functional testing, but anyway...

If you're going to innovate, you might as well jump the curve*.

-ben

* see Kawasaki "Art of Innovation"
http://blog.guykawasaki.com/2007/06/art_of_innovati.html

Gary McGraw wrote:
> Aloha Jim,
>
> I agree that security bugs should not necessarily take precedence
> over other bugs.  Most of the initiatives that we observed cycled ALL
> security bugs into the standard bug tracking system (most of which
> rank bugs by some kind of severity rating).  Many initiatives put
> more weight on security bugs...note the term "weight" not "drop
> everything and run around only working on security."  See the CMVM
> practice activities for more.
>
> The BSIMM helps to measure and then evolve a software security
> initiative.  The top N security bugs activity is one of an arsenal of
> tools built and used by the SSG to strategically guide the rest of
> their software security initiative.  Making this a "top N bugs of any
> kind" list might make sense for some organizations, but is not
> something we would likely observe by studying the SSG and the
> software security initiative.  Perhaps we suffer from the "looking
> for the keys under the streetlight" problem.
>
> gem
>
>
> On 3/19/09 2:

Re: [SC-L] BSIMM: Confessions of a Software SecurityAlchemist(informIT)

2009-03-25 Thread John Steven
time as assessors argue with development over semantics, next steps, and 
responsibilities.

Each methodology has its own limitations in this department, resulting
from its focus and perspective, IMO. If you look at the OSSTMM, there's
a wealth of definition around activities, which really helps those
implementing it differentiate what techniques they could apply in
testing their system. Its template reporting form falls short on
defining constructs such as root cause and finding, and 'speaks' like an
auditor's report. This doesn't do the depth and breadth of its
assessment techniques justice, which means, ultimately, adopting it will
take a lot of work in the realm of that normalization task we treated
earlier. NIST's methodology formalized controls even more, producing the
800-53 publication. I need to look at the recent foray into app sec and
reconsider ASVS much more closely, and for much longer, to make
judgments in this realm. Currently, I've only considered it in the
insanely and unfairly narrow context of "a set of stuff to look for".
I'll follow up with you on this later this week or next.

[Correlating Risk Systems]
Taking your question literally: risk systems? Most risk management
companies wield PowerPoint and Excel, and as such, glue is hard to come
by--let alone 'open glue'. I don't have much experience with Archer;
their glue is proprietary, but their suite includes the ability to weave
together policy, requirements, findings, and change/bug management. It
sits outside the MS Office stack, but what little experience I've had
with it wasn't necessarily positive  ;-)

I hope this answers your questions... if not, fire more away,
-jOHN


From: Jim Manico [...@manico.net]

I like where your head is at - great list.

Regarding:

> Builds adapters so that bugs are automatically entered in tracking systems

Does the industry have:

1) A standard schema for findings, root causes, vulnerabilities, etc.,
and the inter-relation of these key terms (and others?)
2) Standardized APIs for allowing different risk systems to correlate
this data?

Or is it, right now, mostly proprietary glue? Curious...

Also, how do you build adaptors so that manual processes are
automatically entered in a tracking system? Are you just talking about
content management systems to make it easy for manual reviewers to enter
data into risk management software?

Anyhow, I like where your head is at and it definitely got me thinking.

 - Jim

- Original Message -
From: "Tom Brennan - OWASP" 
To: "John Steven" ; ;
"Benjamin Tomhave" ; "Secure Code
MailingList" 
Sent: Friday, March 20, 2009 10:37 AM
Subject: Re: [SC-L] BSIMM: Confessions of a Software
SecurityAlchemist(informIT)


> John Stevens for Cyber Czar!
>
> I have "Elect J.Stevens" bumper stickers printing, I retooled my Free
> Kevin sticker press.
>
> Well stated ;) have a great weekend!
>
> -Original Message-
> From: John Steven 
>
> Date: Fri, 20 Mar 2009 14:35:01
> To: Benjamin Tomhave; Secure Code
> MailingList
> Subject: Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist
> (informIT)
>
>
> Tom, Ben, All,
>
> I thought I'd offer more specifics in an attempt to clarify. I train
> people here to argue your position Ben: security vulnerabilities don't
> count unless they affect development.   To this end, we've specifically
> had success with the following approaches:
>
> [Integrate Assessment Practices]
>[What?]
> Wrap the assessment activities (both tool-based and manual techniques) in
> a process that:
>* Normalizes findings under a common reporting vocabulary and
> demonstrates impact
>* Include SAST, DAST, scanning, manual, out-sourced, & ALL findings
> producers in this framework
>* Anchors findings in either a developmental root cause or other
> software artifact:
>* Use Case, reqs, design, spec, etc.
>* Builds adaptors so that bugs are automatically entered in tracking
> systems
>* Adaptors should include both tool-based and manual findings
>* Calculates impact with an agreed-upon mechanism that rates security
> risk with other  factors:
>* Functional release criteria
>* Other non-security non-functional requirements
>
>[Realistic?]
> I believe so. Cigital's more junior consultants work on these very tasks,
> and they don't require an early-adopter to fund or agree to them.  There's
> plenty of tooling out there to help with the adapters and plenty of
> presentations/papers on risk (http://www.riskanalys.is), normalizing
> findings (http://cwe.mitre.org/), and assessment methodology
> (http://www.cigital

Re: [SC-L] IBM Acquires Ounce Labs, Inc.

2009-07-29 Thread John Steven
All,

The question of "Is my answer going to be high-enough resolution to support 
manual review?" or "...to support a developer fixing the problem?" comes down 
to "it depends".  And, as we all know, I simply can't resist an "it depends" 
kind of subtlety.

Yes, Jim, if you're doing a pure JavaSE application, and you don't care
about non-standard compilers (jikes, gcj, etc.), then the source and the
binary are largely equivalent (at least in terms of resolution); Larry
mentioned gcj. Ease of parsing, however, is a different story (for
instance, actual dependencies are way easier to pull out of a binary
than out of the source code, whereas stack-local variable names are
easiest in source).
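
As a small illustration of the dependency point, here's a sketch using
the ASM bytecode library (the .class path is hypothetical): every call
the compiler resolved sits right in the class file, with no import
parsing or name resolution required.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.objectweb.asm.ClassReader;
    import org.objectweb.asm.ClassVisitor;
    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;

    public class DependencyDump {
        public static void main(String[] args) throws Exception {
            byte[] clazz = Files.readAllBytes(Paths.get("Foo.class")); // hypothetical
            new ClassReader(clazz).accept(new ClassVisitor(Opcodes.ASM9) {
                @Override
                public MethodVisitor visitMethod(int access, String name,
                        String desc, String sig, String[] exceptions) {
                    return new MethodVisitor(Opcodes.ASM9) {
                        @Override
                        public void visitMethodInsn(int opcode, String owner,
                                String name, String desc, boolean itf) {
                            // 'owner' is the callee's class: an actual,
                            // compiler-resolved dependency.
                            System.out.println(owner.replace('/', '.'));
                        }
                    };
                }
            }, 0);
        }
    }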

Where you care about "a whole web application" rather than a pure-Java
module, you have to concern yourself with JSP and all the other MVC
technologies. Setting aside the topic of XML-based configuration files,
you'll want to know what (container) your JSPs were compiled to target.
In this case, source code is different from binary. Similar factors
sneak themselves in across the Java platform.

Then you've got the world of Aspect-Oriented Programming. Spring, and a
broader class of packages that use AspectJ to weave code into your
application, will dramatically change the face of your binary. To get
the same resolution out of your source code, you must in essence 'apply'
those pointcuts yourself... Getting binary-quality resolution from
source code therefore means predicting which transforms will occur at
which pointcut locations. I highly doubt any source-based approach will
get this thoroughly correct.
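
A minimal sketch of why, using Spring-style AspectJ annotations (the
package and advice are hypothetical): the aspect below appears nowhere
in the service classes' source files, yet after weaving, every matched
method in the .class files is wrapped in extra code.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class AuditAspect {
        // Hypothetical pointcut: every public method in the service
        // layer. The weaver injects this advice into the woven bytecode;
        // a source-only scan of the service classes never sees the call.
        @Around("execution(public * com.example.service..*.*(..))")
        public Object audit(ProceedingJoinPoint pjp) throws Throwable {
            System.out.println("entering " + pjp.getSignature());
            try {
                return pjp.proceed();   // run the original method body
            } finally {
                System.out.println("leaving " + pjp.getSignature());
            }
        }
    }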

Finally, from the perspective of dynamic analysis, one must consider the
post-compiler transforms that occur. Java involves both JIT and HotSpot
compilation (using two HotSpot compilers, client and server, each of
which conducts different transforms), which neither binary- nor
source-code-based static analysis is likely to correctly predict or
account for. The binary image that runs is simply not that which is fed
to ClassLoader.defineClass() as a bytestream.

...and (actually) finally, one of my favorite code-review techniques is
to ask for both a .war/.ear/.jar file AND the source code. This almost
invariably gets a double-take, but it's worth the trouble. How many
times do you think the web.xml files match between the two? What
exposure might you report if they were identical? ... What might you
test for if they're dramatically different?
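
A quick sketch of that check (both paths hypothetical): pull web.xml
out of the delivered .war and diff it against the copy in the source
tree.

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class WebXmlDiff {
        public static void main(String[] args) throws Exception {
            // Hypothetical locations of the deployed artifact and source.
            try (ZipFile war = new ZipFile("dist/app.war")) {
                ZipEntry entry = war.getEntry("WEB-INF/web.xml");
                if (entry == null) {
                    System.err.println("no web.xml in the war at all!");
                    return;
                }
                byte[] deployed;
                try (InputStream in = war.getInputStream(entry)) {
                    deployed = in.readAllBytes();
                }
                byte[] source = Files.readAllBytes(
                    Paths.get("src/main/webapp/WEB-INF/web.xml"));
                if (!Arrays.equals(deployed, source)) {
                    // A mismatch means filters, servlets, or security
                    // constraints differ between what was reviewed and
                    // what actually runs.
                    System.err.println("web.xml differs: review the delta");
                }
            }
        }
    }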

Ah... Good times,

John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


On 7/28/09 4:36 PM, "ljknews"  wrote:

At 8:39 AM -1000 7/28/09, Jim Manico wrote:

> A quick note, in the Java world (obfuscation aside), the source and
> "binary" are really the same thing. The fact that Fortify analyzes
> source and Veracode analyzes class files is a fairly minor detail.

It seems to me that would only be true for those using a
Java bytecode engine, not those using a Java compiler that
creates machine code.



[SC-L] Static Vs. Binary

2009-07-30 Thread John Steven
Something occurred to me last night as I pondered where this
discussion's tendrils are taking us.

A point I only made implicitly is this: The question





Re: [SC-L] Static Vs. Binary

2009-08-04 Thread John Steven
Pravir,

HA!  :D

(Knowing me, you can predict what I’m about to say)

YES, explaining what the tools will need to do correctly as they evolve
toward their next generation isn’t useful to a practitioner on this
list today.

 ...

But, it is very important, as a practitioner, to understand what your tools 
aren't accurately taking into account; many organizations do little else than 
triage and report on tool results. For instance, when a particular tool says 
it supports a technology (such as Spring or Spring MVC), what does that mean? 
Weekly, our consultants augment a list of things the [commercial tool they're 
using that day] doesn't do because it doesn't 'see' a config file, a property, 
or some aspect that would have been present in the binary (or even the source 
code), etc...
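
For a concrete (and hypothetical) illustration: the Java below looks inert to a 
scanner that never reads the property file, because the class that actually 
runs is named only in configuration.

    import java.io.FileInputStream;
    import java.util.Properties;

    public class ConfigDrivenDispatch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // app.properties might contain:
            //   handler.class=com.example.LegacyHandler
            try (FileInputStream in = new FileInputStream("app.properties")) {
                props.load(in);
            }
            // Which code runs next is decided by the config file, so a
            // source-only scan that skips app.properties never sees it.
            Runnable handler = (Runnable) Class.forName(
                    props.getProperty("handler.class"))
                    .getDeclaredConstructor().newInstance();
            handler.run();
        }
    }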

I'll accept that advice targeted at the tool vendors themselves isn't very 
useful to consumers of this list (it is for your new company though, eh?), but 
I think it is important as a security practitioner, if you're building an 
assurance program within your org., to understand what the tools/techniques 
you're relying on can and can't find (or disprove the existence of) within your 
applications' code bases. Increasingly, this will include what their notion of 
'binary' is, as list participants begin to consume vendors' SaaS 
static-analysis-of-binary services.


John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


On 7/30/09 10:57 PM, "Pravir Chandra"  wrote:

First, I generally agree that there are many factors that make the true and 
factual fidelity of static analysis really REALLY difficult.

However, I submit that by debating this point, you're belaboring the correct 
angle of survivable Neptunian atmospheric entry with people who don't 
generally value the benefit of flying humans past the moon.

The point being: if you're debating the minutiae of static analysis vis-a-vis 
compile-time optimizations, you're convincing people to let perfect be the 
enemy of good. There are few (if any) perfect technologies, but we use them 
because they're needed and provide a ton of great value. Anyone who doubts this 
should glance at the device you're reading this on and imagine refusing to use 
it because it doesn't have perfect security (or reliability, or usability, 
etc.).

-Original Message-
From: John Steven 

Something occurred to me last night as I pondered where this discussion's
tendrils are taking us.

A point I only made implicitly is this. As I wrote:

> All,
>
> The question of "Is my answer going to be high-enough resolution to support
> manual review?" or "...to support a developer fixing the problem?" comes down
> to "it depends". And, as we all know, I simply can't resist an "it depends"
> kind of subtlety.
>
> Yes, Jim, if you're doing a pure JavaSE application, and you don't care about
> non-standard compilers (jikes, gcj, etc.), then the source and the binary are
> largely equivalent (at least in terms of resolution); Larry mentioned gcj.
> Ease of parsing, however, is a different story (for instance, actual
> dependencies are way easier to pull out of a binary than the source code,
> whereas stack-local variable names are easiest in source).
>
> Where you care about "a whole web application" rather than a pure-Java module,
> you have to concern yourself with JSP and all the other MVC technologies.
> Setting aside the topic of XML-based configuration files, you'll want to know
> what (container) your JSPs were compiled to target. In this case, source code
> is different from binary. Similar factors sneak in across the Java
> platform.
>
> Then you've got the world of Aspect-Oriented Programming. Spring and a broader
> class of packages that use AspectJ to weave code into your application will
> dramatically change the face of your binary. To get the same resolution out of
> your source code, you must in essence 'apply' those pointcuts yourself...
> Getting binary-quality resolution from source code therefore means predicting
> what transforms will occur at what pointcut locations. I highly doubt any
> source-based approach will get this thoroughly correct.
>
> Finally, from the perspective of dynamic analysis, one must consider the
> post-compiler transforms that occur. Java involves both JIT and HotSpot
> compilation (using two HotSpot compilers, client and server, each of which
> conducts different transforms), which neither binary- nor source-code-based
> static analysis is likely to correctly predict or account for. The binary
> image that runs is
> simply not that which is fed t