[SC-L] OWASP webappsec mailing list

2006-10-10 Thread Jeff Williams

Hi,



I'd like to invite you to join (or rejoin) the OWASP
webappsec mailing list. We started this mailing list almost 5 years ago, and it
has spawned great discussion of application security issues. We're moving
the list from its current home to a server controlled by OWASP. This will allow
us to provide the high quality moderation the list deserves.

You can join (or rejoin) us on the webappsec list here:

http://lists.owasp.org/mailman/listinfo/webappsec

If you haven't visited OWASP in a while, please come
check out what's going on. The OWASP standard tools, like WebScarab and WebGoat,
have all been improving steadily over time. And we have tons of new projects,
content, and tools, including:

- OWASP AJAX Security Project - investigating the security of AJAX-enabled applications
- OWASP CAL9000 Project - a JavaScript-based web application security testing suite
- OWASP Code Review Project - a new project to capture best practices for reviewing code
- OWASP Honeycomb Project - a guide to the building blocks of application security
- OWASP LAPSE Project - an Eclipse-based static source code analysis tool for Java
- OWASP Live CD Project - a CD with application security analysis and testing tools
- OWASP Orizon Project - a flexible code review engine
- OWASP Pantera Web Assessment Studio Project - a hybrid testing approach
- OWASP PHP Project - helping PHP developers build secure applications
- OWASP Java Project - helping Java and J2EE developers build secure applications
- OWASP SQLiX Project - a full Perl-based SQL injection scanner
- OWASP Testing Project - application security testing procedures and checklists
- OWASP Validation Project - guidance and tools related to validation



As always, OWASP is free and open for everyone. Please
forward this message to anyone who is interested in application security.
Thanks for your support.

--Jeff

Jeff Williams, Chair
The OWASP Foundation
Dedicated to finding and fighting the causes of insecure software

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] A New Open Source Approach to Weakness

2006-08-11 Thread Jeff Williams
We're familiar with the CWE project and there's a lot of overlap between
our vulnerabilities - not surprising given that most came from the same
sources.  Where possible we're trying to keep the same names.  We've
found that some of the topics are really attacks, and have organized
them accordingly.  One of the really great things that CWE has done is
providing links to actual CVE entries demonstrating each of the
vulnerabilities.

We started Honeycomb to:

 - create a complete library of application security building-blocks,
including principles, threats, attacks, vulnerabilities, and
countermeasures

 - enable the rich interconnection of those building-blocks in ways that
a strict one-dimensional taxonomy cannot allow

 - encourage security experts in the community to share their knowledge,
argue, edit, discuss, and resolve in 'wisdom of crowds' fashion

--Jeff

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Thursday, August 10, 2006 7:06 PM
To: sc-l@securecoding.org
Subject: Re: [SC-L] A New Open Source Approach to Weakness

The Honeycomb project seems interesting.  This sounds a lot like the
Common Weakness Enumeration (CWE - see http://cwe.mitre.org) effort that
has been going on for the past year as part of the DHS software
assurance
metrics and tool evaluation project.  The CWE is an aggregation of
sources
including Seven Pernicious Kingdoms, CLASP, PLOVER, the OWASP Top Ten, the
Web Security Threat Classification, 19 Deadly Sins, etc. that describes
software weaknesses (to date ~500 of them) in a consistently named
fashion
and provides a taxonomy to organize the relationships between the
weaknesses.  The classification comes with the help of a large community
effort including NIST, MITRE, DHS, NSA, many commercial organizations,
academia, and the public.  And, I believe there are currently 15-20 tool
vendors, including Fortify Software and Secure Software, that are
contributing and mapping their content to the CWE.

Thanks,

Michael Gegick

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc -
http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Re: [WEB SECURITY] On sandboxes, and why you should care

2006-05-27 Thread Jeff Williams

Dinis Cruz wrote:
 If you do accept that it is possible to build such sandboxes, then we
 need to move to the next interesting discussion, which is the 'HOW'
 
 Namely, HOW can an environment be created where the development and
 deployment of such Sandboxes makes business sense.

It's the business sense part of this that's really difficult.  It wouldn't
be *that* hard to put sandbox enforcement into all libraries.  If you want
to protect against XSS, put a validation and encoding sandbox into
HttpServletRequest.  If you want to stop SQL injection, get rid of
non-PreparedStatement and build in some control for direct references.  As
long as there are no unmanaged calls (and assuming type-safety, etc...) then
all calls can be mediated by a sandbox.
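
For what it's worth, here's a minimal sketch of that kind of library-level
enforcement (the wrapper class, the whitelist pattern, and the findAccount
helper are invented for illustration -- they are not existing APIs):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.regex.Pattern;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletRequestWrapper;

    // Hypothetical sketch: a request wrapper that enforces a validation
    // "sandbox" on every parameter before the application sees it.
    public class ValidatingRequest extends HttpServletRequestWrapper {
        private static final Pattern SAFE = Pattern.compile("[\\p{Alnum} .,@_-]*");

        public ValidatingRequest(HttpServletRequest request) {
            super(request);
        }

        @Override
        public String getParameter(String name) {
            String value = super.getParameter(name);
            if (value != null && !SAFE.matcher(value).matches()) {
                throw new SecurityException("Unsafe value for parameter: " + name);
            }
            return value;
        }

        // The SQL injection half of the argument: if the only query API the
        // library exposes is parameterized, concatenation attacks disappear.
        public static ResultSet findAccount(Connection conn, String id)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM accounts WHERE id = ?");
            ps.setString(1, id);  // bound as data, never parsed as SQL
            return ps.executeQuery();
        }
    }

A container could install the wrapper in a filter so application code never
sees the raw request; the point is that the enforcement lives in the library
rather than in every page.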

But the complexity of configuring the sandbox is the hard part.  You're
trying to move the security enforcement out of the code and into something
else. So you need a language that allows the developer to specify all those
rules.  And if the sandbox is powerful enough to only allow exactly what the
developer specifically wants to allow (positive security model), the
language will have to be just as complex as the code it's sandboxing. 

The Java sandbox is already too complex for most developers to use. I've
tangled with it several times and come away only partially accomplishing
what I wanted.  (And uncovering a massive flaw in one vendor's custom
sandbox implementation). 

This complexity is a general sandbox problem, not specific to Java or .NET
or anything else.  The most hopeless I've worked with is the Compartmented
Mode Workstation (CMW) label encodings and permissions scheme.  The web
application firewall products also have this problem.  Even .htaccess files
are generally a mess.  It's just a TON of work to move security rules out of
the code and into something else.  And developers don't want to learn some
new language to do it.

So while it might be possible to create sandboxes that are far more
powerful, the complexity goes through the roof.  And we can't even get
developers to use the relatively simple policy file for the Java sandbox.
If anything, I think we should focus on the big easy wins, like Microsoft
did by adding (some) XSS protection for .NET apps.  But the configuration
has to be really easy -- like ON/OFF.

--Jeff



___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


[SC-L] Re: On sandboxes, and why you should care

2006-05-27 Thread Jeff Williams
I don't really see this as two approaches.  At one end of the spectrum, you
can do security inside your code. But there are many underlying sandboxes
that restrict what your code is able to do. Each of these may be
configurable to be 'semi-permeable' for your app.

My opinion is people should locate the security where it's most effective
and easiest to build, configure, maintain, and operate. For example, you
could enforce access to local files in your code, in the VM, or in the OS.
Where is best?  Depends on what you're doing and your environment.  The
important point is that everyone involved understands the approach and that
it's simple to verify.

I'm confused about how your 'fix the tools' approach is any different from a
sandbox. Building APIs that prevent developers from making security mistakes
is just the same idea.

And there's another important use of sandboxes that I think you're ignoring.
When you have to run code that you didn't develop, the end-user can use the
sandbox to protect their underlying platform. It's a question of control.

--Jeff

 -Original Message-
 From: Brian Eaton [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 26, 2006 10:54 AM
 To: [EMAIL PROTECTED]
 Cc: Dinis Cruz; Stephen de Vries; Secure Coding Mailing List; owasp-
 [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: [WEB SECURITY] RE: [SC-L] Re: [WEB SECURITY] On sandboxes,
 and why you should care
 
 On 5/26/06, Jeff Williams [EMAIL PROTECTED] wrote:
 
  Dinis Cruz wrote:
   If you do accept that it is possible to build such sandboxes, then we
   need to move to the next interesting discussion, which is the 'HOW'
  
   Namely, HOW can an environment be created where the development and
   deployment of such Sandboxes makes business sense.
 
  It's the business sense part of this that's really difficult.  It wouldn't
  be *that* hard to put sandbox enforcement into all libraries.  If you want
  to protect against XSS, put a validation and encoding sandbox into
  HttpServletRequest.  If you want to stop SQL injection, get rid of
  non-PreparedStatement and build in some control for direct references.
 
 Two distinct approaches to fixing software are described here.
 
 With one method, sandboxes, developers need to do a bunch of extra
 work to define sandbox policies for their applications.  Sandboxes
 don't have a great track record because it is too much work to do them
 properly.
 
 With the other method, better tools and APIs, developers do less work
 and get better results.  The reason buffer overflows are relatively
 rare in web applications is because web applications aren't usually
 written in C.  They are written in high level languages that do bounds
 checking automatically.  XSS is endemic in web applications because
 the tool sets encourage people to generate HTML on the fly.  The
 evidence is clear: fix the tools and you'll end up with more secure
 apps.
 
 Sandboxes are best understood as band-aids on buggy, broken
 applications.  Band-aids have a place, but avoiding the errors in the
 first place is more effective.
 
 Regards,
 Brian

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-11 Thread Jeff Williams
Stephen de Vries wrote:
 With application servers such as Tomcat, WebLogic etc, I think we have a
 special case in that they don't run with the verifier enabled - yet they
 appear to be safe from type confusion attacks.  (If you check the
 startup scripts, there's no mention of running with -verify).

You're right -- I checked that too.  So I think it's just too simple to talk
about the verifier being either on or off.  It appears to me that the
verifier can be enabled for some code and not for other code.  I think
you're right that this behavior has something to do with the classloader
that is used, but I'd really like to understand exactly what the rules are.

--Jeff


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-03 Thread Jeff Williams
Two important clarifications for Java
(based on my experiments):



1) The verifier IS enabled for the classes
that come with the Java platform, such as those in rt.jar. So, for
example, if you create a class that tries to set System.security (the private variable
that points to the SecurityManager instance), you get a verification exception.
(If this were possible, it would allow a complete bypass of the Java sandbox).
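
As a sketch of the experiment (I'm assuming the usual stub-compilation trick
here: compile against a doctored copy of java.lang.System in which the
security field was declared public, then run against the real runtime
classes):

    // With the verifier enabled, the illegal access to the private field
    // is rejected with a VerifyError/IllegalAccessError before this code
    // ever runs.
    public class Killer {
        public static void main(String[] args) {
            System.security = null;  // try to null out the SecurityManager
            System.out.println("SecurityManager is now: "
                + System.getSecurityManager());
        }
    }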



2) The verifier also seems to be enabled
for classes running inside Tomcat. I'm not sure about other J2EE
containers.



So I don't think it's fair to
say that most Java code is running without verification.



But Dinis is right. There is a real
problem with verification, as demonstrated in the message below. This is
a clear violation of the Java VM Spec, yet my messages to the team at Sun
developing the new verifier have been ignored. And it's a real
issue, given the number of applications that rely on libraries they didn't
compile. I don't think a real explanation of how the Sun verifier actually
works is too much to ask, given the risk.





--Jeff

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Dinis Cruz
Sent: Tuesday, May 02, 2006 7:48 PM
To: 'Secure Coding Mailing List'
Cc: '[EMAIL PROTECTED]'
Subject: [SC-L] By default, the Verifier is disabled on .Net and Java





Here is a more detailed explanation of why (in my previous post) I said: "99%
of .Net and Java code that is currently deployed is executed in an environment
where the VM verifier is disabled."

--

In .Net the verifier (the CLR function that checks for type safety) is only
enabled on partial trust .Net environments.

For example, in Full Trust .Net you can successfully assign an object of Type A
to a variable of Type B (a 'Type Confusion' attack), which clearly breaks type safety.

I have done some research on this topic, and in my spare time I was able to
find several examples of these situations:


- Possible Type Confusion issue in .Net 1.1 (only works in FullTrust): http://owasp.net/blogs/dinis_cruz/archive/2005/11/08/36.aspx
- Another Full Trust CLR Verification issue: Exploiting Passing Reference Types by Reference: http://owasp.net/blogs/dinis_cruz/archive/2005/12/28/393.aspx
- Another Full Trust CLR Verification issue: Changing Private Field using Proxy Struct: http://owasp.net/blogs/dinis_cruz/archive/2005/12/28/394.aspx
- Another Full Trust CLR Verification issue: changing the Method Parameters order: http://owasp.net/blogs/dinis_cruz/archive/2005/12/26/390.aspx
- C# readonly modifier is not enforced by the CLR (when in Full Trust): http://owasp.net/blogs/dinis_cruz/archive/2005/12/26/390.aspx
- Also related:
  - JIT prevents short overflow (and PeVerify doesn't catch it): http://owasp.net/blogs/dinis_cruz/archive/2006/01/10/422.aspx
  - ANSI/UNICODE bug in System.Net.HttpListenerRequest: http://www.owasp.net//blogs/dinis_cruz/archive/2005/12/17/349.aspx


Here is Microsoft's 'on the record' comment about this
lack of verification (and enforcement of type safety) on Full Trust code (note:
I received these comments via the MSRC):

...
Some people have argued that Microsoft should always enforce type safety
at runtime (i.e. run the verifier) even if code is Fully Trusted.
We've chosen not to do this for a number of reasons (e.g. historical,
perf, etc). There are at least two important things to consider about
this scenario:

1) Even if we tried to enforce type safety using the verifier for Fully
Trusted code, it wouldn't prevent Fully Trusted code from accomplishing the
same thing in 100 other different ways. In other words, your example
accessed an object as if it were a different incompatible type - the
verifier could have caught this particular technique that allowed him to
violate type safety. However, he could have accomplished the same
result using private reflection, direct memory access with unsafe code,
or indirectly doing stuff like using PInvoke/native code to disable
verification by modifying the CLR's verification code either on disk or
in memory. There would be a marginal benefit to ensuring people wrote
cleaner, more type safe code by enforcing verification at runtime for
Full Trust, but you wouldn't get any additional security benefits
because you can perform unverifiable actions in dozens of ways the
verifier won't prevent if you are Fully Trusted.

2) As mentioned at the end of #1 above, one argument is that it's good
for programmers (even fully trusted ones) to follow type safety rules,
and doing runtime verification would keep people writing cleaner code.
However, we don't need to do the verification at runtime in order to
encourage good type safety hygiene. Instead, we can rely on our
languages to do this for us. For example, C# and VB by default ensure
that you produce verifiable code. If you've written your code in a
language like C#, you're not going to run into cases where you've
accidentally created 

[SC-L] Re: 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-29 Thread Jeff Williams
 Jeff, as you can see by Stephen de Vries's response on this thread,
 you are wrong in your assumption that most Java code (since 1.2)
 must go through the Verifier (this is what I suspected was happening,
 since I remembered reading that most Java code executed in
 real-world applications is not verified)

Wow.  I ran some tests too, and Stephen is absolutely right.  It appears
that Sun quietly turned off verification by default for bytecode loaded from
the local disk (not applets).  They've apparently acknowledged that it is a
bug (http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4030988) and said
that it will not be fixed.  The change had something to do with
compatibility with old bytecode.  More details:
http://www.cafeaulait.org/reports/accessviolations.html

This is a clear violation of the JVM Spec. And (regardless of protestation
to the contrary) it IS a big security problem.  Just because bytecode is
loaded from the local disk does not mean it's trustworthy.  Every
application uses lots of libraries that developers download from the
Internet (as compiled jar files) and load from the local disk.  Unless you
run with java -verify, that code won't get verified.
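
For reference, the relevant launcher flags (MyApp is a placeholder; -Xverify
is the HotSpot spelling of the older -verify switch):

    java MyApp                 # default: classes from the local classpath are not verified
    java -verify MyApp         # force verification of all classes
    java -Xverify:all MyApp    # HotSpot equivalent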

I'm sure that the percentage of applications that are running with both
verification and a sandbox is terrifyingly small.  Probably only applets and
maybe Java Web Start applications.  As I mentioned before some of the J2EE
servers are now enabling a sandbox, but their security policies are
generally wide open.

I think there are two relatively easy things we can do here. First, let's
find out what plans Sun has for the new verifier -- we should strongly
encourage them to turn it on by default.  Second, we can work on ways to
encourage people to use sandboxes -- tools, articles, and awareness.

--Jeff



___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] ZDNET: LAMP lights the way in open-source security

2006-03-07 Thread Jeff Williams
I'm a strong advocate of static analysis, but drawing conclusions about
overall security based only on these tools is just silly.  Even ignoring the
scripting language problem, these tools simply aren't even looking for many
of the types of problems that cause the most serious risks.  They're great
for assisting a code review or indicating potential design flaws, but not a
great ruler.  At least not yet.

--Jeff

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On
 Behalf Of Gavin, Michael
 Sent: Tuesday, March 07, 2006 12:46 PM
 To: Jeremy Epstein; Kenneth R. van Wyk; Secure Coding Mailing List
 Subject: RE: [SC-L] ZDNET: LAMP lights the way in open-source security
 
 Yeah, statistics can allow you to say and prove just about anything.
 
 OK, showing my ignorance here, since I haven't checked out any of the
 LAMP source trees and reviewed the code: how much of the code making up
 those modules is written in scripting languages vs. how much of it is
 written in C, C++ (and how much, if any, is written in any other
 compiled languages)?
 
 If the LAMP source code itself is primarily C/C++, then arguably, the
 results are somewhat interesting, though I think they would be much more
 interesting if this DISA project was set up to test the open source code
 with a number of commercial scanners instead of just the Coverity
 scanner, then we could at least compare the merits of various scanning
 techniques and implementations. In this case, the distinction to me is
 that they have tested the LAMP platform code, not the code that people
 write on top of it for their applications, and are making some
 statements about the software security of the LAMP platform compared to
 the rest of the open source code they scanned.
 
 If on the other hand, a significant portion of the LAMP code base itself
 is made up of scripting language code, then I agree with you, the
 results aren't terribly useful to anyone other than possibly Coverity
 and Stanford. Note: 'significant' is open to interpretation, but doesn't
 have to be large; 10 or 15 per cent would seem significant enough to me.
 
 -Original Message-
 From: Jeremy Epstein [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, March 07, 2006 12:17 PM
 To: Gavin, Michael; Kenneth R. van Wyk; Secure Coding Mailing List
 Subject: RE: [SC-L] ZDNET: LAMP lights the way in open-source security
 
 All of which proves that there are lies, damn lies, and statistics (the
 statistic being the lower bug density, which ignores the most
 potentially
 vulnerable parts of the system).
 
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Gavin, Michael
  Sent: Tuesday, March 07, 2006 11:49 AM
  To: Kenneth R. van Wyk; Secure Coding Mailing List
  Subject: RE: [SC-L] ZDNET: LAMP lights the way in open-source
  security
 
  The Coverity product (Coverity Prevent) is a static source
  code analysis tool for C and C++, see
  http://www.coverity.com/library/pdf/coverity_prevent.pdf.
 
  It isn't actually scanning (or if it is, it isn't analyzing)
  any of the scripting code, as far I as can tell.
 
  Michael
 
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Kenneth R. van Wyk
  Sent: Tuesday, March 07, 2006 10:56 AM
  To: Secure Coding Mailing List
  Subject: [SC-L] ZDNET: LAMP lights the way in open-source security
 
  Interesting article out on ZDNet today:
 
  http://www.zdnetasia.com/news/security/0,39044215,39315781,00.htm
 
  The article refers to the US government sponsored study being
  done by Stanford University, Symantec, and Coverity.  It
  says, The so-called LAMP stack of open-source software has a
  lower bug density--the number of bugs per thousand lines of
  code--than a baseline of 32 open-source projects analyzed,
  Coverity, a maker of code analysis tools, announced Monday.
 
  This surprised me quite a bit, especially given LAMP's
  popular reliance on scripting languages PHP, Perl, and/or
  Python.  Still, the article doesn't discuss any of the root
  causes of the claimed security strengths in LAMP-based code.
  Perhaps it's because the scripting languages tend to make
  things less complex for the coders (as opposed to more
  complex higher level languages like Java and C#/.NET)?  Opinions?
 
  Cheers,
 
  Ken
  --
  Kenneth R. van Wyk
  KRvW Associates, LLC
  http://www.KRvW.com
 
 
  ___
  Secure Coding mailing list (SC-L)
  SC-L@securecoding.org
  List information, subscriptions, etc -
  http://krvw.com/mailman/listinfo/sc-l
  List charter available at -
  http://www.securecoding.org/list/charter.php
 
  ___
  Secure Coding mailing list (SC-L)
  SC-L@securecoding.org
  List information, subscriptions, etc -
  http://krvw.com/mailman/listinfo/sc-l
  List charter available at -
  http://www.securecoding.org/list/charter.php
 
 
 

RE: [SC-L] Bugs and flaws

2006-02-07 Thread Jeff Williams
I'm not sure which of the three definitions in Brian's message you're not
concurring with, but I think he was only listing them as strawmen anyway.

In any case, there's no reason that static analysis tools shouldn't be able
to find errors of omission. We use our tools to find these 'dogs that didn't
bark' every day.

The tools can identify, for example, places where logging, input validation,
and error handling should have been done. With a little work teaching the
tool about your application, assets, and libraries, it's easy to find places
where encryption, access control, and authentication should have been done
but haven't been.

In your hypothetical, if the API isn't ever invoked with an identity and a
secret, there can't be authentication. If there's no call to an access
control component, we know at least that there's no centralized mechanism.
In this case, the tool could check whether the code follows the project's
standard access control pattern. If not, it's an error of omission.
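
To make that concrete, here's a deliberately toy sketch (the *Servlet.java
naming convention and the AccessControl.isAuthorized entry point are invented
for illustration, and real tools query a parsed semantic model rather than
matching raw text):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Toy "error of omission" check: flag any servlet source file that
    // never calls the project's standard access control entry point.
    public class OmissionCheck {
        public static void main(String[] args) throws IOException {
            Files.walk(Paths.get(args[0]))
                .filter(p -> p.toString().endsWith("Servlet.java"))
                .forEach(p -> {
                    try {
                        String src = new String(Files.readAllBytes(p));
                        if (!src.contains("AccessControl.isAuthorized(")) {
                            System.out.println("No access control check in: " + p);
                        }
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }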

If I remember correctly, Saltzer and Schroeder only suggested 8 principles.
Your hypo is closest to complete mediation, but touches on several others.
But, in theory, there's no reason that static analysis can't help verify all
of them in an application.

--Jeff

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Gary McGraw
Sent: Monday, February 06, 2006 11:13 PM
To: Brian Chess; sc-l@securecoding.org
Subject: RE: [SC-L] Bugs and flaws

Hi all,

I'm afraid I don't concur with this definition.  Here's a (rather vague)
flaw example that may help clarify what I mean.  Think about an error of
omission where an API is exposed with no AA protection whatsoever.  This
API may have been designed not to have been exposed originally, but somehow
became exposed only over time.

How do you find errors of omission with a static analysis tool?  

This is only one of Saltzer and Schroeder's principles in action.  What of
the other 9?

gem

P.s. Five points to whoever names the principle in question.

P.p.s. The book is out www.swsec.com

 -Original Message-
From:   Brian Chess [mailto:[EMAIL PROTECTED]
Sent:   Sat Feb 04 00:56:16 2006
To: sc-l@securecoding.org
Subject:RE: [SC-L] Bugs and flaws

The best definition for "flaw" and "bug" I've heard so far is that a flaw is
a successful implementation of your intent, while a bug is unintentional.  I
think I've also heard "a bug is small, a flaw is big", but that definition
is awfully squishy.

If the difference between a bug and a flaw is indeed one of intent, then I
don't think it's a useful distinction.  Intent rarely brings with it other
dependable characteristics.

I've also heard "bugs are things that a static analysis tool can find", but
I don't think that really captures it either.  For example, it's easy for a
static analysis tool to point out that the following Java statement implies
that the program is using weak cryptography:

SecretKey key = KeyGenerator.getInstance("DES").generateKey();

Brian

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php







___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] RE: The role static analysis tools play in uncovering elements of design

2006-02-06 Thread Jeff Williams
Brian,

"Show me places in the program where property X holds"

Yes. That's it exactly. Current tools can answer this type of question to
some extent, but they're not really designed for it. The interaction
contemplated by most of the tools is more like "show me the line of code the
vulnerability is on." This doesn't really help verify the security of an
application and doesn't work at the design level. The "property X" approach
does both.

 aiding in program understanding, it needs to allow you to easily
 add new rules of your own construction.

This is absolutely critical. In addition to creating new rules, we need to
be able to tag custom libraries and methods with their security
properties. This will allow existing rules to be applied in new contexts,
e.g. tagging a custom validation method with "untaint" so that existing data
validation rules now include it.
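
A sketch of what that tagging might look like (the @Untainted annotation is
hypothetical -- no particular analyzer is assumed to support it):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical marker a tool could be taught to recognize.
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.CLASS)
    @interface Untainted { }

    public class Validators {
        // Tagging cleanse() as a sanitizer lets existing taint rules
        // treat its return value as safe in every new context.
        @Untainted
        public static String cleanse(String input) {
            return input.replaceAll("[^\\p{Alnum} .,@_-]", "");
        }
    }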

 Whether or not you want to see this path depends on how important
 it really is to you that encryption is absolutely never bypassed.
 Your tolerance for noise is dictated by the level of assurance
 you require.

Absolutely. The encryption example demonstrates your point well. Still, I
wouldn't want anyone to get the impression that there's a direct
relationship between the signal-to-noise setting on the tool and the level
of assurance one gets in an application. This is because the tools tend to
find the problems that are easiest for them to find, not the ones that
represent the biggest risk.

For example, access control problems in web applications are difficult to
find automatically, because the implementations are generally complex and
distributed across a software baseline. So even if I only want a typical
commercial level of assurance in a web application, I have to turn up the
volume on the tools all the way. And even that might not make them visible.

--Jeff




___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


[SC-L] RE: The role static analysis tools play in uncovering elements of design

2006-02-04 Thread Jeff Williams
I think there's a lot more that static analysis can do than what you're
describing. They're not (necessarily) just fancy pattern matchers.

Static analysis can add security meta-information to a software baseline. If
the tool knows which methods are related to which security mechanisms, it
can help you find, navigate, and understand their design. The tools help me
generate a security 'view' of a software baseline.

Does the application do encryption? Is it centralized? What algorithms are
used? What data flows are affected? Are there any paths around the
encryption? Where are the keys stored? Is there proper error handling and
logging for the encryption mechanism? Static analysis tools make answering
all these questions easier.
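
As a trivial illustration of the kind of query involved (a regex toy, not how
a real analyzer works -- those answer this from data flow across the whole
baseline):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Toy "security view" query over one source file: which crypto
    // algorithms does it request, and via which factory?
    public class CryptoView {
        public static void main(String[] args) throws Exception {
            String src = new String(Files.readAllBytes(Paths.get(args[0])));
            Matcher m = Pattern
                .compile("(Cipher|KeyGenerator)\\.getInstance\\(\"([^\"]+)\"")
                .matcher(src);
            while (m.find()) {
                System.out.println(m.group(1) + " requests algorithm: " + m.group(2));
            }
        }
    }

Run against a file that calls KeyGenerator.getInstance("DES"), it reports the
weak algorithm by name -- a first cut at the "what algorithms are used, and
where?" question.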

Today's static analysis tools are only starting to help here. Tools focused
on dumping out a list of vulnerabilities don't work well for me. Too many
false alarms.  Maybe that's what you meant by 'inhibit'.

--Jeff
 
Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com
email: [EMAIL PROTECTED]
phone: 410-707-1487
 

From: John Steven [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 03, 2006 1:40 PM
To: Jeff Williams; Secure Coding Mailing List
Subject: The role static analysis tools play in uncovering elements of
design 

Jeff,

An unpopular opinion I’ve held is that static analysis tools, while very
helpful in finding problems, inhibit a reviewer’s ability to collect as
much information about the structure, flow, and idiom of code’s design as
the reviewer might find if he/she spelunks the code manually.

I find it difficult to use tools other than source code navigators (Source
Insight) and scripts to facilitate my code understanding (at the
design-level). 

Perhaps you can give some examples of static analysis library/tool use that
overcomes my prejudice—or are you referring to the navigator tools as well?

-
John Steven   
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

  
snipped
Static analysis tools can help a lot here. Used properly, they can provide
design-level insight into a software baseline. The huge advantage is that
it's correct.

--Jeff 
snipped




___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-02 Thread Jeff Williams
At the risk of piling on here, there's no question that it's critical to
consider security problems across the continuum. While we're at it, the
analysis should start back even further with the requirements or even the
whole system concept.

All of the representations across the continuum (rqmts, arch, design, code)
are just models of the same thing.  They start more abstract and end up as
code.  A *single* problem could exist in all these models at the same time.

Higher-level representations of systems are generally eclipsed by lower
level ones fairly rapidly.  For example, it's a rare group that updates
their design docs as implementation progresses. So once you've got code, the
architecture-flaws don't come from architecture documents (which lie). The
best place to look for them (if you want truth) is to look in the code.

To me, the important thing here is to give software teams good advice about
the level of effort they're going to have to put into fixing a problem. If
it helps to give a security problem a label to let them know they're going
to have to go back to the drawing board, I think saying 'architecture-flaw'
or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
doesn't help distinguish it from 'bug' in the minds of most developers or
architects.

--Jeff

Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Wednesday, February 01, 2006 5:07 PM
To: John Steven
Cc: Will Kruse; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

John Steven wrote:
 I'm not sure there's any value in discussing this minutia further, but
here
 goes:
   
We'll let the moderator decide that :)

 1) Crispin, I think you've nailed one thing. The continuum from:

 Architecture -- Design -- Low-level Design -- (to) Implementation

 is a blurry one, and certainly slippery as you move from 'left' to
'right'.
   
Cool.

 But, we all should understand that there's commensurate blur in our
analysis
 techniques (aka architecture and code review) to assure that as we sweep
 over software that we uncover both bugs and architectural flaws.
   
Also agreed.

 2) Flaws are different in important ways from bugs when it comes to
presentation,
 prioritization, and mitigation. Let's explore by physical analog first.
   
I disagree with the word usage. To me, bug and flaw are exactly
synonyms. The distinction being drawn here is between implementation
flaws vs. design flaws. You are just creating confusing jargon to
claim that flaw is somehow more abstract than bug. Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

 I nearly fell through one of my consultant's tables as I leaned on it this
 morning. We explored: "Bug or flaw?"
   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-02 Thread Jeff Williams
Um, so if there is no documentation you can't find design flaws?

--Jeff

-Original Message-
From: Gary McGraw [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 02, 2006 8:50 PM
To: Jeff Williams; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

I'm sorry, but it is just not possible to find design flaws by staring at
code.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:32:29 2006
To: 'Secure Coding Mailing List'
Subject:RE: [SC-L] Bugs and flaws

At the risk of piling on here, there's no question that it's critical to
consider security problems across the continuum. While we're at it, the
analysis should start back even further with the requirements or even the
whole system concept.

All of the representations across the continuum (rqmts, arch, design, code)
are just models of the same thing.  They start more abstract and end up as
code.  A *single* problem could exist in all these models at the same time.

Higher-level representations of systems are generally eclipsed by lower
level ones fairly rapidly.  For example, it's a rare group that updates
their design docs as implementation progresses. So once you've got code, the
architecture-flaws don't come from architecture documents (which lie). The
best place to look for them (if you want truth) is to look in the code.

To me, the important thing here is to give software teams good advice about
the level of effort they're going to have to put into fixing a problem. If
it helps to give a security problem a label to let them know they're going
to have to go back to the drawing board, I think saying 'architecture-flaw'
or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
doesn't help distinguish it from 'bug' in the minds of most developers or
architects.

--Jeff

Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Wednesday, February 01, 2006 5:07 PM
To: John Steven
Cc: Will Kruse; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

John Steven wrote:
 I'm not sure there's any value in discussing this minutia further, but
here
 goes:
   
We'll let the moderator decide that :)

 1) Crispin, I think you've nailed one thing. The continuum from:

 Architecture -- Design -- Low-level Design -- (to) Implementation

 is a blurry one, and certainly slippery as you move from 'left' to
'right'.
   
Cool.

 But, we all should understand that there's commensurate blur in our
analysis
 techniques (aka architecture and code review) to assure that as we sweep
 over software that we uncover both bugs and architectural flaws.
   
Also agreed.

 2) Flaws are different in important ways from bugs when it comes to
presentation,
 prioritization, and mitigation. Let's explore by physical analog first.
   
I disagree with the word usage. To me, bug and flaw are exactly
synonyms. The distinction being drawn here is between implementation
flaws vs. design flaws. You are just creating confusing jargon to
claim that flaw is somehow more abstract than bug. Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

 I nearly fell through one of my consultant's tables as I leaned on it this
 morning. We explored: "Bug or flaw?"
   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php






RE: [SC-L] Bugs and flaws

2006-02-02 Thread Jeff Williams
That's not my experience. I believe there are many design problems you can
find more quickly and, more importantly, accurately by using the code. I
find this to be true even when there is a documented design -- but there's
no question in the case where all you have is code.

In fact, if the design isn't fairly obvious in the code, then that's a
security problem in itself. Unless it's clear, developers won't understand
it and will make more mistakes.

Static analysis tools can help a lot here. Used properly, they can provide
design-level insight into a software baseline. The huge advantage is that
it's correct.

--Jeff 

-Original Message-
From: Gary McGraw [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 02, 2006 9:06 PM
To: [EMAIL PROTECTED]; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

Not unless you talk to the designer.  You might get lucky and find a design
problem or two by looking at code, but that usually doesn't work.

That's not to say that all systems have adequate documentation about design
(not to mention requirements that you correctly cited before)!  They don't.
When they don't, you have to try to construct them.  Doing them from code is
very difficult at best.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:59:14 2006
To: Gary McGraw; 'Secure Coding Mailing List'
Subject:RE: [SC-L] Bugs and flaws

Um, so if there is no documentation you can't find design flaws?

--Jeff

-Original Message-
From: Gary McGraw [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 02, 2006 8:50 PM
To: Jeff Williams; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

I'm sorry, but it is just not possible to find design flaws by staring at
code.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:32:29 2006
To: 'Secure Coding Mailing List'
Subject:RE: [SC-L] Bugs and flaws

At the risk of piling on here, there's no question that it's critical to
consider security problems across the continuum. While we're at it, the
analysis should start back even further with the requirements or even the
whole system concept.

All of the representations across the continuum (rqmts, arch, design, code)
are just models of the same thing.  They start more abstract and end up as
code.  A *single* problem could exist in all these models at the same time.

Higher-level representations of systems are generally eclipsed by lower
level ones fairly rapidly.  For example, it's a rare group that updates
their design docs as implementation progresses. So once you've got code, the
architecture-flaws don't come from architecture documents (which lie). The
best place to look for them (if you want truth) is to look in the code.

To me, the important thing here is to give software teams good advice about
the level of effort they're going to have to put into fixing a problem. If
it helps to give a security problem a label to let them know they're going
to have to go back to the drawing board, I think saying 'architecture-flaw'
or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
doesn't help distinguish it from 'bug' in the minds of most developers or
architects.

--Jeff

Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Wednesday, February 01, 2006 5:07 PM
To: John Steven
Cc: Will Kruse; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

John Steven wrote:
 I'm not sure there's any value in discussing this minutia further, but
here
 goes:
   
We'll let the moderator decide that :)

 1) Crispin, I think you've nailed one thing. The continuum from:

 Architecture -- Design -- Low-level Design -- (to) Implementation

 is a blurry one, and certainly slippery as you move from 'left' to
'right'.
   
Cool.

 But, we all should understand that there's commensurate blur in our
analysis
 techniques (aka architecture and code review) to assure that as we sweep
 over software that we uncover both bugs and architectural flaws.
   
Also agreed.

 2) Flaws are different in important ways from bugs when it comes to
presentation,
 prioritization, and mitigation. Let's explore by physical analog first.
   
I disagree with the word usage. To me, bug and flaw are exactly
synonyms. The distinction being drawn here is between implementation
flaws vs. design flaws. You are just creating confusing jargon to
claim that flaw is somehow more abstract than bug. Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

 I nearly fell through one of my consultant's tables as I leaned on it this
 morning. We explored: "Bug or flaw?"
   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).

[SC-L] ANN: WebGoat 3.7 - Application Security hands-on learning environment

2005-09-09 Thread Jeff Williams
The *only* way to learn application security is to test applications
hands-on and examine their source code. To encourage the next
generation of application security experts, the Open Web Application 
Security Project (OWASP) has developed an extensive lesson-based 
training environment called WebGoat.

WebGoat is a lesson-based, deliberately insecure web application
designed to teach web application security. Each of the 25 lessons
provides the user an opportunity to demonstrate their understanding by
exploiting a real vulnerability. WebGoat provides the ability to examine
the underlying code to gain a better understanding of the vulnerability,
as well as runtime hints to assist in solving each lesson. V3.7
includes lessons covering most of the OWASP Top Ten vulnerabilities and 
contains several new lessons on web services, SQL Injection, and 
authentication.

WebGoat 3.7 is available for free download from:

http://www.owasp.org/software/webgoat.html

Simply unzip, run, and go to WebGoat in your browser to start learning.

The OWASP Foundation is dedicated to finding and fighting the causes of 
insecure software. Find out more at http://www.owasp.org.

--Jeff 





Re: [SC-L] Why Software Will Continue to Be Vulnerable

2005-05-01 Thread Jeff Williams
What really mystifies me is the analogy to fire insurance. *Everyone*
keeps their fire insurance up to date, it costs money, and it protects
against a very rare event that most fire insurance customers have never
experienced. What is it that makes consumers exercise prudent good
sense for fire insurance, but not in selecting software?
Fire safety is physical, not tremendously complicated, and we have tons of 
actuarial data. Software security, on the other hand, is extremely difficult 
for anyone to measure -- it takes a lot of effort, even with the most 
advanced tools and knowledge.

So there's no way for anyone to tell which software is secure.  Many vendors 
make dramatically inflated claims about their product's security features 
and rarely get called on them.  For example, there are dozens of vendors 
claiming that their technology solves the OWASP Top Ten -- which is 
ridiculous.

Anyway, it's not surprising to me that consumers aren't seeking out 
security.  Or that vendors aren't providing it for that matter.  In my 
opinion, the market is broken because of asymmetric information, and it will 
never work until we find ways to make security more visible to everyone.

I did a talk on this at the NSA High Confidence Software and Solutions 
conference a few weeks back.  The slides are here:
http://www.aspectsecurity.com/documents/Aspect_HCSS_Brief.ppt

--Jeff
Jeff Williams
Aspect Security, Inc.
www.aspectsecurity.com


Re: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-06 Thread Jeff Williams
I would think this might work, but I - if I ran a software development
company - would be very scared about signing that contract... Even if
I did everything right, who's to say I might not get blamed? Anyway,
insurance would end up being the solution.
What you *should* be scared of is a contract that's silent about security. 
Courts will have to interpret (make stuff up) to figure out what the two 
parties intended.  I strongly suspect courts will read in terms like "the
software shall not have obvious security holes."  They will probably rely on
documents like the OWASP Top Ten to establish a baseline for trade practice.

Contracts protect both sides.  Have the discussion.  Check out the OWASP 
Software Security Contract Annex for a
template (http://www.owasp.org/documentation/legal.html).

--Jeff

- Original Message -
From: Michael Silk [EMAIL PROTECTED]
To: Kenneth R. van Wyk [EMAIL PROTECTED]
Cc: Secure Coding Mailing List SC-L@securecoding.org
Sent: Wednesday, April 06, 2005 9:40 AM
Subject: Re: [SC-L] Application Insecurity --- Who is at Fault?
 Quoting from the article:
 ''You can't really blame the developers,''

 I couldn't disagree more with that ...

 It's completely the developers fault (and managers). 'Security' isn't
 something that should be thought of as an 'extra' or an 'added bonus'
 in an application. Typically it's just about programming _correctly_!

 The article says it's a 'communal' problem (i.e: consumers should
 _ask_ for secure software!). This isn't exactly true, and not really
 fair. Insecure software or secure software can exist without
 consumers. They don't matter. It's all about the programmers. The
 problem is they are allowed to get away with their crappy programming
 habits - and that is the fault of management, not consumers, for
 allowing 'security' to be thought of as something separate from
 'programming'.

 Consumers can't be punished and blamed, they are just trying to get
 something done - word processing, emailing, whatever. They don't need
 to - nor should, really - care about lower-level security in the
 applications they buy. The programmers should just get it right, and
 managers need to get a clue about what is acceptable 'programming' and
 what isn't.

 Just my opinion, anyway.

 -- Michael


 On Apr 6, 2005 5:15 AM, Kenneth R. van Wyk [EMAIL PROTECTED] wrote:
 Greetings++,

 Another interesting article this morning, this time from 
 eSecurityPlanet.
 (Full disclosure: I'm one of their columnists.)  The article, by 
 Melissa
 Bleasdale and available at
 http://www.esecurityplanet.com/trends/article.php/3495431, is on the
 general
 state of application security in today's market.  Not a whole lot of 
 new
 material there for SC-L readers, but it's still nice to see the 
 software
 security message getting out to more and more people.

 Cheers,

 Ken van Wyk
 --
 KRvW Associates, LLC
 http://www.KRvW.com








[SC-L] Secure software development contract annex

2005-02-22 Thread Jeff Williams
Hi,
I'd love to get this list's feedback on a new document from OWASP.
OWASP Secure Software Development Contract Annex 
(http://www.owasp.org/documentation/legal.html)

Everyone involved with a software contracting relationship of any kind, even 
within a single application team, should have a discussion about security. 
This document is a *starting point* and is intended to facilitate that 
discussion.

Please let the team know if this document is helpful, or if you don't like 
the model.  We're actively trying to improve the document.

--Jeff


Re: [SC-L] Programming languages used for security

2004-07-12 Thread Jeff Williams
 To get REALLY back to the point, I'd like to comment on Fabien's comment
 that In my opinion, it's the most important things for a languages,
 something to easily validate user input or to encrypt password are a must
 have.  Fabien is right, but increasingly that's only half the problem.
 There also needs to be something in the libraries for the language to
 securely store data that can't be one-way hashed, as are (inbound)
 passwords.  For example, if I need to store the password my application
 needs to authenticate to a database, or other critical data, it would be
 nice to have that built into the language libraries, instead of having to
 build it myself.  It would certainly reduce the number of programmers who
 build such storage mechanisms themselves, and insecurely at that.

I'm really glad to see this point raised.  I really have very little
interest in the which language debate, because most of the software I see
depends so heavily on *libraries*.  The real genius of Java in my opinion is
that they slapped a standard API on top of just about everything (graphics,
databases, networking, phone systems, microplatforms, crypto, and much
more). Some other languages have also been successful here in a somewhat
less standardized way.

But just slapping an API on something is not the same as making it easy to
use securely. Java's JCE is a perfect case in point - they encrypted the API
itself! ;-) To me, it's far more important that the libraries are easy to
use securely than language syntax stuff. So how do we encourage library
writers to write APIs that are easy to use securely?

I'd like to see libraries that force the developer to explicitly do
something special if they want to get around the default secure way of doing
things.  It's not enough to just include a bunch of security features into
the libraries.  I've seen far too many libraries that expose a very powerful
API and make it too easy for a developer to make security mistakes.
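
Here's a sketch of what "secure by default, with a loud escape hatch" could
look like (class and method names are invented for illustration):

    // Output is HTML-encoded unless the caller explicitly asks for the
    // dangerous path -- and the method name makes that easy to audit.
    public final class SafeHtml {
        private final StringBuilder out = new StringBuilder();

        // The default: everything is encoded.
        public SafeHtml text(String s) {
            for (char c : s.toCharArray()) {
                switch (c) {
                    case '<':  out.append("&lt;");   break;
                    case '>':  out.append("&gt;");   break;
                    case '&':  out.append("&amp;");  break;
                    case '"':  out.append("&quot;"); break;
                    default:   out.append(c);
                }
            }
            return this;
        }

        // Getting around the default requires saying so, loudly.
        public SafeHtml rawHtmlDangerous(String trustedMarkup) {
            out.append(trustedMarkup);
            return this;
        }

        @Override
        public String toString() { return out.toString(); }
    }

A reviewer can grep for rawHtmlDangerous, and a developer can't wander into
the unencoded path by accident.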

Does anyone have pointers to articles on designing API's so that they are
easy to use securely?

--Jeff

Jeff Williams
Aspect Security
http://www.aspectsecurity.com