Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread Boberski, Michael [USA]
Regarding PKI, we travel in different circles when it comes to that, perhaps 
best to leave that one there.

Anywho... All sorts of apples and oranges are being mixed up here. There is the 
security of a targeted app, of the components in the environment that it 
depends on to run, of the environment itself, and of the whole mess when 
everything's up and running. ESAPI is intended to be used by the targeted app. 
The overall security will then be dragged up or down by the components in the 
environment, by operations, and by how the app was developed in the first 
place. Different strategies and whatnot can then be applied to claw one's way 
back up to a targeted level of assurance. Your comments below 
span these areas, when we're really just trying to talk about one. Hooray for 
ESAPI for helping out with the piece of the puzzle that it can. You bet that 
we're going to explain and promote its use, as people who have contributed to 
its development and adoption.

The documentation and whatnot are being worked on. We could use a hand if 
you've got some cycles! I wrote a first draft of a design patterns guide not 
too long ago; please feel free to review and provide comments (it's on the 
project page). The project presentation also speaks to a number of the items 
below. Threat models, white papers, all good, if you (or anyone) wish to 
contribute!

Best,

Mike B.

-Original Message-
From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On 
Behalf Of John Steven
Sent: Thursday, January 07, 2010 1:03 PM
To: Secure Coding
Subject: Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security 
made simple ! | Core Security Patterns Weblog

Jim,

Yours was the predicted response. The ref-impl. to API side-step does not fix 
the flaw in the argument though.

No, you do not need "A" ESAPI to build secure apps.

Please re-read my email carefully.

Alternatives:
1) Some organizations adopt OWASP ESAPI's ref-impl.
2) Others build their own; they do agree and see the value, yes.

#1 and #2 agree with your position.

3) Some secure their toolkits (again, "a la secure struts")

Calling such a "secure struts" an organization's ESAPI stretches the ESAPI 
concept far too greatly to pass muster. Indeed, were it to pass, it would 
violate properties 3 and 4 (and very likely 2) within my previous email's 
advantage list.

Mr. Boberski, you too need to re-read my email. I advise you strongly not to 
keep saying that ESAPI is "like PK-enabling" an APP. I don't think many people 
got a good feeling about how much they spent on, or how effective their PKI 
implementation was ;-). Please consider how you'd ESAPI-enable the millions of 
lines of underlying framework code beneath the app.

4) Policy + Standards, buttressed with a robust assurance program

Some organizations have sufficiently different threat models and deployment 
scenarios within their 'four walls' that they opt for specifying an overarching 
policy and checking each sub-organization's compliance--commensurate with their 
risk tolerance and each app deployment's threat model. Each sub-organization 
may-or-may-not choose to leverage items one and two from this list. I doubt, 
however, you'd argue that more formal methods of verification don't suffice to 
perform 'as well' as ESAPI in securing an app (BTW, I have seen commercial 
implementations opt for such verification as an alternative to a security 
toolkit approach). Indeed, a single security API would likely prove a 
disservice if crammed down the throats of sub-organizations that differ too 
greatly.

At best, the implicit "ESAPI or the highway" campaign slogan applies to only 
50% of the alternatives I've listed. And since the ESAPI project doesn't have 
documented and publicly available good, specific, actionable requirements, 
mis-use cases, or a threat model from which it's working, the OWASP ESAPI 
project doesn't do as much as it could for the #2 option above.

Jim, Mike, I see your posts all throughout the blogosphere and mailing 
lists. Two-line posts demanding people adopt ESAPI or forgo all hope can put 
people off. It conjures close-minded religion to me. Rather:

* Consider all four of the options above, one might be better than OWASP ESAPI 
within the context of the post
* Consider my paragraph following Point #4. Create:

* An ESAPI mis-use case guide; back out the security policy it manifests
  or the requirements it implements (and don't point me to the unit
  tests--I've read them)
* Document an ESAPI threat model (For which apps will developers have
  their expectations met adopting ESAPI? Which won't?)
* A document describing experiment results, before and after ESAPI:
  how many findings does a pen test produce? A code review?
* Write an adoption guide. Apps are only created in a green-field
  once. Then they live in maintenance forever. How do you apply
  ESAPI to a real-world app already in production without
  risk/regression?

Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread ljknews
At 2:37 PM -0600 1/7/10, Wall, Kevin wrote:
> Larry Kilgallen wrote...
> 
>> At 10:43 AM -0600 1/7/10, Stephen Craig Evans wrote:
>>
>> > I am VERY curious to learn how these happened... Only using the last
>> > digit of the year? Hard for me to believe. Maybe it's in a
>> single API
>> > and somebody tried to be too clever with some bit-shifting.
>>
>> My wife says that in the lead-up to the year 2000 she caught
>> some programmers "fixing" Y2K bugs by continuing to store
>> year numbers in two digits and then just prefixing output
>> with 19 if the value was greater than some two digit number
>> and prefixing output with 20 if the value was less than or
>> equal to that two digit number.
>>
>> Never underestimate programmer creativity.
>>
>> Never overestimate programmer precision.
> 
> While I never fixed any Y2K problems I worked next to someone
> who did for about 6 months. What you refer to is pretty much what
> I mentioned as the "fixed window" technique that was very common
> to those developers who were addressing the problems at the time.
> 
> IIRC, it was a particularly popular approach for those who waited until
> the last moment to address Y2K issues in their systems because it still
> allowed for 2 digit year fields in all their forms and databases and output.

Going back to the original Y2K issue, within the past 5 years
my wife and I visited a friend of my late father.  This friend
had retired as somewhat of a bigwig at an industrial giant that
formerly was in the business of manufacturing their own line of
computers.  He admitted that "back in the day" they had set up
things to use two digits for storing year numbers, knowing that
before the year 2000 came around, _they_ would all be retired.
-- 
Larry Kilgallen


Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread Wall, Kevin
Larry Kilgallen wrote...

> At 10:43 AM -0600 1/7/10, Stephen Craig Evans wrote:
>
> > I am VERY curious to learn how these happened... Only using the last
> > digit of the year? Hard for me to believe. Maybe it's in a
> single API
> > and somebody tried to be too clever with some bit-shifting.
>
> My wife says that in the lead-up to the year 2000 she caught
> some programmers "fixing" Y2K bugs by continuing to store
> year numbers in two digits and then just prefixing output
> with 19 if the value was greater than some two digit number
> and prefixing output with 20 if the value was less than or
> equal to that two digit number.
>
> Never underestimate programmer creativity.
>
> Never overestimate programmer precision.

While I never fixed any Y2K problems I worked next to someone
who did for about 6 months. What you refer to is pretty much what
I mentioned as the "fixed window" technique that was very common
to those developers who were addressing the problems at the time.

IIRC, it was a particularly popular approach for those who waited until
the last moment to address Y2K issues in their systems because it still
allowed for 2 digit year fields in all their forms and databases and output.

---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] "Checklist Manifesto" applicability to software security

2010-01-07 Thread Gary McGraw
hi sc-l,

I am pretty sure that Brian Chess used to have this in his standard talk many 
years ago.  Then again, I am getting old.

Great analogy.  Note that checklists DO NOT take the place of the intensive 
care staff!

gem


On 1/7/10 10:11 AM, "Jeremy Epstein"  wrote:

Greetings,

I was listening yesterday to an interview [1] on NPR with Dr. Atul
Gawande, author of "Checklist Manifesto" [2].  He describes the
problem that medical procedures (e.g., surgery) tend to have lots of
mistakes, mostly caused because of leaving out important steps.  He
claims that 2/3 of medical - or maybe surgical - errors can be avoided
by use of checklists.  Checklists aren't very popular among doctors,
because they don't like to see themselves as factory workers following
a procedure, because the human body is extremely complex, and because
every patient is unique.

So as I was listening, I was thinking that many of the same things
could be said about software developers and problems with software
security - every piece of software is unique, any non-trivial piece of
software is amazingly complex, developers tend to consider themselves
as artists creating unique works, etc.

Has anyone looked into the parallelisms before?  If so, I'd be
interested in chatting (probably offlist) about your thoughts.

--Jeremy

[1] Listen to the interview at http://wamu.org/programs/dr/10/01/06.php#29280
[2] "The Checklist Manifesto: How to Get Things Right", Atul Gawande,
Metropolitan Books.


Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread Wall, Kevin
Stephen Craig Evans wrote...

> Looks like there's another one:
>
> Symantec Y2K10 Date Stamp Bug Hits Endpoint Protection Manager
> http://www.eweek.com/c/a/Security/Symantec-Y2K10-Date-Stamp-Bug-Hits-Endpoint-Protection-Manager-472518/?kc=EWKNLSTE01072010STR1
>
> I am VERY curious to learn how these happened... Only using the last
> digit of the year? Hard for me to believe. Maybe it's in a single API
> and somebody tried to be too clever with some bit-shifting.

Just speculation, but perhaps all these systems are using the "fixed window"
technique to address these two digit year fields common on credit cards.
The "pivot point" year that is chosen determines whether a 2-digit year field
belongs to one century or the other. This could just be a carry-over from the
Y2K fixes and a rather poor choice of pivot point. I worked next to a person
who did some Y2K fixes for
lots of mainframes back in 1998-99, and he said that using 'windowing'
to address this was a pretty common technique because companies did not
want to expand all their databases and forms, etc. to allow for 4 digits.

For example, if 1980 was chosen as the pivot year, then 2 digit years
80 through 99 would be assigned '1900' as the century and 00 through 79
would be assigned '2000' as the century. So perhaps 1910 was chosen as
the pivot year (if DOB was a consideration, that would not be all that
unreasonable) so that 10 through 99 is interpreted as the 1900s and
00 through 09 as the 2000s. So we hit 2010, a credit card has a 2-digit
year for its expiration or transaction date or whatever, and all of a
sudden 01/10 or 01/07/10 is interpreted as 1910.
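
To make the windowing concrete, here is a minimal Java sketch of the
fixed-window expansion described above, assuming the 1980 pivot from my
example; the class and method names are made up for illustration and are not
taken from any affected system:

// Hypothetical illustration of the "fixed window" technique: a two-digit
// year is expanded to four digits by comparing it against a pivot year.
public final class FixedWindow {

    // Two-digit years >= PIVOT are treated as 19xx; the rest as 20xx.
    private static final int PIVOT = 80;   // i.e., pivot year 1980

    /** Expand a two-digit year (00-99) into a full four-digit year. */
    public static int expandYear(int twoDigitYear) {
        if (twoDigitYear < 0 || twoDigitYear > 99) {
            throw new IllegalArgumentException("expected a two-digit year");
        }
        return (twoDigitYear >= PIVOT) ? 1900 + twoDigitYear
                                       : 2000 + twoDigitYear;
    }

    public static void main(String[] args) {
        System.out.println(expandYear(80));  // 1980
        System.out.println(expandYear(10));  // 2010
    }
}

Change PIVOT to 10 and expandYear(10) returns 1910, which is exactly the
failure mode I am guessing at.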

Using such a fixed windowing technique (there is also a sliding window
technique that was a more expensive "fix") was usually considered only a
stop-gap measure, with most organizations intending to fix things for real
before the pivot year gave them trouble. But we all know how good intentions
work... or not.

Anyhow, like I said, this is only a GUESS of what might be going on. I have
no hard data to back it up.

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
kevin.w...@qwest.comPhone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
- Edsger Dijkstra, How do we tell truths that matter?
  http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html




Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread Steven M. Christey


On Thu, 7 Jan 2010, Stephen Craig Evans wrote:


I am VERY curious to learn how these happened...


My name is Steve.  I had a 2010 problem.

An internal CVE support program was "hit" by this issue.  Fortunately, 
there weren't any fatal results and it was only an annoyance.  However: I 
had an input validation routine that did a sanity-check on dates, which I 
wrote sometime around 2005.  The check would generate a specific complaint 
if a date was 2010 or later since, after all, it was 2005 - a time when 
resources for development were extremely low - and it worked back then. 
(Now I'm starting to rationalize that all my bad practices back then were 
"Agile" instead of "cheap hacks."  Yes, that was deliberately 
inflammatory.)


The regexp to check the year was something like /^(199\d|200\d)$/, and the 
informative error message would say that the year portion of the date 
appeared to be invalid.  There was a separate check that also made sure 
that a given date wasn't in the future, so this message was basically a 
secondary bit of detail.
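
For the curious, here is a minimal Java sketch of that kind of too-narrow
check; the class name and the relaxed pattern are illustrative guesses on my
part, not the actual CVE code:

import java.util.regex.Pattern;

// Illustrative only: a year sanity check with the flaw described above.
public final class YearCheck {

    // Matches 1990-2009 only, so 2010 and later draw the "invalid year" nag.
    private static final Pattern NARROW =
            Pattern.compile("^(199\\d|200\\d)$");

    // A looser alternative: accept any plausible 19xx/20xx year and let the
    // separate "not in the future" check do the real work.
    private static final Pattern RELAXED =
            Pattern.compile("^(19|20)\\d\\d$");

    public static void main(String[] args) {
        System.out.println(NARROW.matcher("2010").matches());   // false
        System.out.println(RELAXED.matcher("2010").matches());  // true
    }
}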


Anyway, 5 years passed and I forgot about the limitation of that routine 
until it started generating informational error messages when CVE team 
members submitted new CVE content.


One could say that this was under the radar of my threat model when it 
should have been part of the threat model for these major vendors, but it 
was still a known bug/feature that never got fixed until it had to be 
fixed.


I'm sure I have a few other date-sensitive dependencies that are not a 
high priority to fix, given current conditions and practices. I'll 
probably be close to retirement age come 2038 when the Unix year bug shows 
up.  If CVE is still around then, and my code is still being used, well, 
it's gonna be someone else's problem.


Anybody else willing to admit their 2010 mistakes and the conditions that 
led to them?  Or was it just me and a couple huge companies?


- Steve


Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread ljknews
At 10:43 AM -0600 1/7/10, Stephen Craig Evans wrote:

> I am VERY curious to learn how these happened... Only using the last
> digit of the year? Hard for me to believe. Maybe it's in a single API
> and somebody tried to be too clever with some bit-shifting.

My wife says that in the lead-up to the year 2000 she caught
some programmers "fixing" Y2K bugs by continuing to store
year numbers in two digits and then just prefixing output
with 19 if the value was greater than some two digit number
and prefixing output with 20 if the value was less than or
equal to that two digit number.

Never underestimate programmer creativity.

Never overestimate programmer precision.
-- 
Larry Kilgallen


Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread Boberski, Michael [USA]
To expand upon "But you need "A" ESAPI for your organization" briefly, 

From a certain point of view, just as applications can be PK-enabled, they can 
be ES-enabled. Instead of a PKI toolkit, one uses an Enterprise Security API 
toolkit. Instead of signature functions, think input validation functions, 
where security checks and effects are performed according to a given 
organization's/application's security policy. Note that crypto should also be 
wrapped, but I'm trying not to overcomplicate things in this note. The toolkit 
may be an OWASP version, or it may be one of any number of flavors described 
below, where security functions are logically and/or physically grouped into an 
organization/application-specific ESAPI. The OWASP ones just happen to exist 
and are free. Then, developers are trained/instructed/etc. to use them 
depending on their own life cycle best practices.
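
To make "ES-enabled" a bit more concrete, here is a minimal, hypothetical Java
sketch of the sort of organization-specific facade I mean; the class, method
names, and policy rules are invented for illustration and are not the OWASP
ESAPI interfaces:

import java.util.regex.Pattern;

// Hypothetical organization-specific "ESAPI-style" facade: application code
// calls these wrappers instead of scattering ad hoc checks, and the rules
// below stand in for the organization's security policy.
public final class AcmeSecurityApi {

    private static final Pattern USERNAME =
            Pattern.compile("^[A-Za-z0-9._-]{1,32}$");

    private AcmeSecurityApi() { }

    /** Return the input if it satisfies the username policy, else reject. */
    public static String getValidUsername(String input) {
        if (input == null || !USERNAME.matcher(input).matches()) {
            throw new IllegalArgumentException("username failed validation");
        }
        return input;
    }

    /** Encode untrusted data before placing it in HTML output (rough cut). */
    public static String encodeForHtml(String input) {
        return input.replace("&", "&amp;").replace("<", "&lt;")
                    .replace(">", "&gt;").replace("\"", "&quot;");
    }
}

Swapping the OWASP reference implementation, or a home-grown one, in behind
such a facade changes the plumbing but not the call sites.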

I have more to say, but let us start here.

Best,
 
Mike B.

-Original Message-
From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On 
Behalf Of Jim Manico
Sent: Thursday, January 07, 2010 10:56 AM
To: John Steven
Cc: Secure Coding
Subject: Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security 
made simple ! | Core Security Patterns Weblog

John,

You do not need OWASP ESAPI to secure an app. But you need "A" ESAPI for your 
organization in order to build secure Apps, in my opinion.  
OWASP ESAPI may help you get started down that path.

An ESAPI is no silver bullet, there is no such thing as that in AppSec. But it 
will help you build secure apps.

Jim Manico

On Jan 6, 2010, at 6:20 PM, John Steven  wrote:

> All,
>
> With due respect to those who work on ESAPI, Jim included, ESAPI is 
> not the only way "to make a secure app even remotely possible." And I 
> believe that underneath their own pride in what they've done--some of 
> which is very warranted--they understand that. It's hard not to become 
> impassioned in posting.
>
> I've seen plenty of good secure implementations within organizations' 
> own security toolkits. I'm not the only one that's
> noticed: the BSIMM SSF calls out three relevant activities to this
> end:
>
> SDF 1.1 Build/publish security features (*1)
> SDF 2.1 Find/publish mature design patterns from the organization (similar URL)
> SDF 2.3 Build secure-by-design middleware frameworks/common libraries (similar URL)
>
> Calling out three activities within the SSF means that it can't just 
> be "John Steven's top client" (whatever that means) that's doing this 
> either. I've formally reviewed some of these toolkits and I'd pit them 
> against ESAPI and expect favorable results. Plenty of organizations 
> are doing a great job building stuff on top of profoundly broken 
> platforms, frameworks, and toolkits... and they're following a 'secure 
> SDL' to get there. ESAPI can not be said to have adhered to that rigor 
> (*2). Organizations care about this risk regardless of the pedigree 
> and experience of those who are building it.
>
> Is the right answer for everyone to drop everything and build their 
> own secure toolkit? I don't think so. I like that the OWASP community 
> is taking a whack at something open and free to share.
> These same people have attempted to improve Java's security through 
> the community process--and though often correct, diligent, friendly, 
> and well-intentioned, their patience has often been tested to or 
> beyond the breaking point: those building the platforms and frameworks 
> simply aren't listening that well yet. That is very sad.
>
> One thing I've seen a lot of is organizations assessing, testing, 
> hardening, documenting, and internally distributing their own versions 
> of popular Java EE toolkits (the "secure struts"
> phenomenon). I've seen some organizations give their developers 
> training and write SAST rules to automatically verify correct use of 
> such toolkits. I like this idea a hell of a lot as an alternative to 
> an ESAPI-like approach. Why? A few reasons:
>
> 1) Popularity - these toolkits appeal to developers: their interfaces 
> have been "voted on" by their adopting user population-- not conceived 
> and lamented principally by security folk. No one forces developers to 
> go from Struts to Spring; they do it because it saves them time, makes 
> their app faster, or some combination of important factors.
>
> 2) Changes App Infrastructure - MVC frameworks, especially, make up 
> the scaffolding (hence the name 'Struts') of an application. MVC code 
> often touches user input before developers see it and gets the last 
> chance to encode output before a channel (user or otherwise) receives 
> it. Focusing on an application's scaffolding allows, in some cases, a 
> best chance of touching all input/output and true invisibility relative 
> to developer-generated code. Often, its configuration is declarative 
> in nature as well--keeping security from cluttering up the Java code. 
> 

Re: [SC-L] "Checklist Manifesto" applicability to software security

2010-01-07 Thread Andy Steingruebl
On Thu, Jan 7, 2010 at 7:11 AM, Jeremy Epstein
 wrote:
> Greetings,
>
> So as I was listening, I was thinking that many of the same things
> could be said about software developers and problems with software
> security - every piece of software is unique, any non-trivial piece of
> software is amazingly complex, developers tend to consider themselves
> as artists creating unique works, etc.
>
> Has anyone looked into the parallelisms before?  If so, I'd be
> interested in chatting (probably offlist) about your thoughts.

I've had exceptionally good luck/results from checklists during the
development process, though nothing I could scientifically quantify.

That said, I wonder whether any of the academics on the list would be
willing to actually do a study.  Do some actual trials on defect rates
in things like student assignments when they have some students go
through a checklist to examine their code, and others not.  Might be
interesting to see exactly what types of checklist items really result
in a reduction in bugs...

-- 
Andy Steingruebl
stein...@gmail.com



Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread John Steven
Jim,

Yours was the predicted response. The ref-impl. to API side-step does not fix 
the flaw in the argument though.

No, you do not need "A" ESAPI to build secure apps. 

Please re-read my email carefully. 

Alternatives:
1) Some organizations adopt OWASP ESAPI's ref-impl.
2) Others build their own; they do agree and see the value, yes.

#1 and #2 agree with your position.

3) Some secure their toolkits (again, "a la secure struts")

Calling such a "secure struts" an organization's ESAPI stretches the ESAPI 
concept far too greatly to pass muster. Indeed, were it to pass, it would 
violate properties 3 and 4 (and very likely 2) within my previous email's 
advantage list.

Mr. Boberski, you too need to re-read my email. I advise you strongly not to 
keep saying that ESAPI is "like PK-enabling" an APP. I don't think many people 
got a good feeling about how much they spent on, or how effective their PKI 
implementation was ;-). Please consider how you'd ESAPI-enable the millions of 
lines of underlying framework code beneath the app.

4) Policy + Standards, buttressed with a robust assurance program

Some organizations have sufficiently different threat models and deployment 
scenarios within their 'four walls' that they opt for specifying an overarching 
policy and checking each sub-organization's compliance--commensurate with their 
risk tolerance and each app deployment's threat model. Each sub-organization 
may-or-may-not choose to leverage items one and two from this list. I doubt, 
however, you'd argue that more formal methods of verification don't suffice to 
perform 'as well' as ESAPI in securing an app (BTW, I have seen commercial 
implementations opt for such verification as an alternative to a security 
toolkit approach). Indeed, a single security API would likely prove a 
disservice if crammed down the throats of sub-organizations that differ too 
greatly.

At best, the implicit "ESAPI or the highway" campaign slogan applies to only 
50% of the alternatives I've listed. And since the ESAPI project doesn't have 
documented and publicly available good, specific, actionable requirements, 
mis-use cases, or a threat model from which it's working, the OWASP ESAPI 
project doesn't do as much as it could for the #2 option above.

Jim, Mike, I see your posts all throughout the blogosphere and mailing 
lists. Two-line posts demanding people adopt ESAPI or forgo all hope can put 
people off. It conjures close-minded religion to me. Rather:

* Consider all four of the options above, one might be better than OWASP ESAPI 
within the context of the post
* Consider my paragraph following Point #4. Create:

* An ESAPI mis-use case guide; back out the security policy it manifests
  or the requirements it implements (and don't point me to the unit
  tests--I've read them)
* Document an ESAPI threat model (For which apps will developers have
  their expectations met adopting ESAPI? Which won't?)
* A document describing experiment results, before and after ESAPI:
  how many findings does a pen test produce? A code review?
* Write an adoption guide. Apps are only created in a green-field
  once. Then they live in maintenance forever. How do you apply 
  ESAPI to a real-world app already in production without 
  risk/regression?

* Generate an argument as to why ESAPI beats these alternatives. Is it cost? 
Speed-to-market? What?
* Finally, realize that it's OK that there's more than one way to do things. 
Revel in it. It's what makes software an exciting field. 

In the meantime, rest assured that those of us out there who have looked get 
that ESAPI can be a good thing.


John Steven
Senior Director; Advanced Technology Consulting
Desk: 703.404.9293 x1204 Cell: 703.727.4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven
http://www.cigital.com
Software Confidence. Achieved.

On Jan 7, 2010, at 10:56 AM, Jim Manico wrote:

> John,
> 
> You do not need OWASP ESAPI to secure an app. But you need "A" ESAPI  
> for your organization in order to build secure Apps, in my opinion.  
> OWASP ESAPI may help you get started down that path.
> 
> An ESAPI is no silver bullet, there is no such thing as that in  
> AppSec. But it will help you build secure apps.
> 
> Jim Manico
> 
> On Jan 6, 2010, at 6:20 PM, John Steven  wrote:
> 
>> All,
>> 
>> With due respect to those who work on ESAPI, Jim included, ESAPI is  
>> not the only way "to make a secure app even remotely possible." And  
>> I believe that underneath their own pride in what they've done--some  
>> of which is very warranted--they understand that. It's hard not to  
>> become impassioned in posting.
>> 
>> I've seen plenty of good secure implementations within  
>> organizations' own security toolkits. I'm not the only one that's  
>> noticed: the BSIMM SSF calls out three relevant activi

Re: [SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread Stephen Craig Evans
Hi Ken,

Looks like there's another one:

Symantec Y2K10 Date Stamp Bug Hits Endpoint Protection Manager
http://www.eweek.com/c/a/Security/Symantec-Y2K10-Date-Stamp-Bug-Hits-Endpoint-Protection-Manager-472518/?kc=EWKNLSTE01072010STR1

I am VERY curious to learn how these happened... Only using the last
digit of the year? Hard for me to believe. Maybe it's in a single API
and somebody tried to be too clever with some bit-shifting.

Stephen

-- 
http://www.linkedin.com/in/stephencraigevans

On Thu, Jan 7, 2010 at 8:45 AM, Kenneth Van Wyk  wrote:
> FYI, below is a link to an article with some additional impact details of the 
> "2010 bug" that's been cropping up in various places.  Still no light being 
> shed on the actual programming error, though.  I think it would make a 
> fascinating case study, or at least discussion, here.
>
> http://www.guardian.co.uk/world/2010/jan/06/2010-bug-millions-germans
>
>
> Cheers,
>
> Ken
>
> -
> Kenneth R. van Wyk
> KRvW Associates, LLC
> http://www.KRvW.com
>
> (This email is digitally signed with a free x.509 certificate from CAcert. If 
> you're unable to verify the signature, try getting their root CA certificate 
> at http://www.cacert.org -- for free.)



Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread Jim Manico

John,

You do not need OWASP ESAPI to secure an app. But you need "A" ESAPI  
for your organization in order to build secure Apps, in my opinion.  
OWASP ESAPI may help you get started down that path.


An ESAPI is no silver bullet, there is no such thing as that in  
AppSec. But it will help you build secure apps.


Jim Manico

On Jan 6, 2010, at 6:20 PM, John Steven  wrote:


All,

With due respect to those who work on ESAPI, Jim included, ESAPI is  
not the only way "to make a secure app even remotely possible." And  
I believe that underneath their own pride in what they've done--some  
of which is very warranted--they understand that. It's hard not to  
become impassioned in posting.


I've seen plenty of good secure implementations within  
organizations' own security toolkits. I'm not the only one that's  
noticed: the BSIMM SSF calls out three relevant activities to this  
end:


SDF 1.1 Build/publish security features (*1)
SDF 2.1 Find/publish mature design patterns from the organization  
(similar URL)
SDF 2.3  Build secure-by-design middleware frameworks/common  
libraries (similar URL)


Calling out three activities within the SSF means that it can't just  
be "John Steven's top client" (whatever that means) that's doing  
this either. I've formally reviewed some of these toolkits and I'd  
pit them against ESAPI and expect favorable results. Plenty of  
organizations are doing a great job building stuff on top of  
profoundly broken platforms, frameworks, and toolkits... and they're  
following a 'secure SDL' to get there. ESAPI can not be said to have  
adhered to that rigor (*2). Organizations care about this risk  
regardless of the pedigree and experience of those who are building  
it.


Is the right answer for everyone to drop everything and build their  
own secure toolkit? I don't think so. I like that the OWASP  
community is taking a whack at something open and free to share.  
These same people have attempted to improve Java's security through  
the community process--and though often correct, diligent, friendly,  
and well-intentioned, their patience has often been tested to or  
beyond the breaking point: those building the platforms and  
frameworks simply aren't listening that well yet. That is very sad.


One thing I've seen a lot of is organizations assessing, testing,  
hardening, documenting, and internally distributing their own  
versions of popular Java EE toolkits (the "secure struts"  
phenomenon). I've seen some organizations give their developers  
training and write SAST rules to automatically verify correct use of  
such toolkits. I like this idea a hell of a lot as an alternative to  
an ESAPI-like approach. Why? A few reasons:


1) Popularity - these toolkits appeal to developers: their  
interfaces have been "voted on" by their adopting user population-- 
not conceived and lamented principally by security folk. No one  
forces developers to go from Struts to Spring; they do it because it 
saves them time, makes their app faster, or some combination of  
important factors.


2) Changes App Infrastructure - MVC frameworks, especially, make up  
the scaffolding (hence the name 'Struts') of an application. MVC  
code often touches user input before developers see it and gets the 
last chance to encode output before a channel (user or otherwise)  
receives it. Focusing on an application's scaffolding allows, in some 
cases, a best chance of touching all input/output and true invisibility 
relative to developer-generated code. Often, its configuration is 
declarative in nature as well--keeping security from cluttering up  
the Java code. Note that this approach is fundamentally different  
from Firewalls and some dynamic patching because it's "in the  
app" (an argument made recently by others in the blogosphere).


3) Top-to-Bottom Secure by Default - Declarative secure-by-default  
configuration of the hardened toolkit allows for securing those data  
flows that never make it out of the scaffolding into the app. If an  
organization wrote their own toolkit-unaware security API, they'd 
have to not only assure their developers call it each and every 
place it's needed, but they'd also need to crack open the 
toolkits and make sure THEY call it too. Most of the time, one 
actively wants to avoid even having this visibility, let alone the 
maintenance problem: it's a major headache.


and, most importantly,

4) Less Integration points - Developers are already going to have to  
integrate against a MVC framework, so why force them to integrate  
against YA (yet-another) API? The MVC frameworks already contend  
with things like session management, input filtering, output- 
encoding, and authentication. Why not augment/improve that existing  
idiom rather than force developers to use it and an external  
security API?


The ESAPI team has plenty of responses to the last question... not  
the least of which is "...'cause [framework XXX] sucks." Fair. Out  
of the box

Re: [SC-L] "Checklist Manifesto" applicability to software security

2010-01-07 Thread Benjamin Tomhave
I think there's lots of applicability. People - especially techies - cut
corners. The pressure is usually to get things done in a certain amount
of time, and then add on that people like to generally expend as little
energy as possible, and voilà! you see the problem.

Of course, the flip side is that checklists in an area like IT can be
detrimental, too. PCI is a great example, where it never made a claim of
being comprehensive, yet is treated as such (and codified in State laws
for crying out loud), and then orgs still get hacked, leaving them to
wonder why the checklist didn't protect them.

Perhaps the key, then, is knowing that you need experience+procedures.
Procedures allow you to not screw up the mundane and routine, while
experience allows you to dynamically respond to issues that don't fit
the precise steps of the procedure. Part and parcel to this, then, is
needing to empower experienced professionals to be flexible and dynamic
in the face of challenges rather than requiring them to rigidly adhere
to procedure in all instances.

Within appsec, QA and related security testing is probably a great
example. If all QA could be strictly proceduralized, then you could just
automate it all. However, testing doesn't always go as expected,
requiring a functioning brain to (hopefully) respond and adapt
accordingly. You probably need procedures for properly catching those
exceptions, but nonetheless, those procedures automatically create a
capacity for dynamic response.

Sorry, a bit rambly...

-ben

Jeremy Epstein wrote:
> Greetings,
> 
> I was listening yesterday to an interview [1] on NPR with Dr. Atul
> Gawande, author of "Checklist Manifesto" [2].  He describes the
> problem that medical procedures (e.g., surgery) tend to have lots of
> mistakes, mostly caused because of leaving out important steps.  He
> claims that 2/3 of medical - or maybe surgical - errors can be avoided
> by use of checklists.  Checklists aren't very popular among doctors,
> because they don't like to see themselves as factory workers following
> a procedure, because the human body is extremely complex, and because
> every patient is unique.
> 
> So as I was listening, I was thinking that many of the same things
> could be said about software developers and problems with software
> security - every piece of software is unique, any non-trivial piece of
> software is amazingly complex, developers tend to consider themselves
> as artists creating unique works, etc.
> 
> Has anyone looked into the parallelisms before?  If so, I'd be
> interested in chatting (probably offlist) about your thoughts.
> 
> --Jeremy
> 
> [1] Listen to the interview at http://wamu.org/programs/dr/10/01/06.php#29280
> [2] "The Checklist Manifesto: How to Get Things Right", Atul Gawande,
> Metropolitan Books.

-- 
Benjamin Tomhave, MS, CISSP
tomh...@secureconsulting.net
Blog: http://www.secureconsulting.net/
Twitter: http://twitter.com/falconsview
LI: http://www.linkedin.com/in/btomhave

[ Random Quote: ]
Pareto Principle (a.k.a. “The 80-20 Rule”): "For many phenomena, 80% of
consequences stem from 20% of the causes."
http://globalnerdy.com/2007/07/18/laws-of-software-development/


Re: [SC-L] "Checklist Manifesto" applicability to software security

2010-01-07 Thread Brian Chess
I think it's a great analogy.  If you'd like to read more without ordering
the book, here's an article Gawande wrote for the New Yorker in 2007:

http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_gawande

Brian

On 1/7/10 7:11 AM, "Jeremy Epstein"  wrote:

> Greetings,
> 
> I was listening yesterday to an interview [1] on NPR with Dr. Atul
> Gawande, author of "Checklist Manifesto" [2].  He describes the
> problem that medical procedures (e.g., surgery) tend to have lots of
> mistakes, mostly caused because of leaving out important steps.  He
> claims that 2/3 of medical - or maybe surgical - errors can be avoided
> by use of checklists.  Checklists aren't very popular among doctors,
> because they don't like to see themselves as factory workers following
> a procedure, because the human body is extremely complex, and because
> every patient is unique.
> 
> So as I was listening, I was thinking that many of the same things
> could be said about software developers and problems with software
> security - every piece of software is unique, any non-trivial piece of
> software is amazingly complex, developers tend to consider themselves
> as artists creating unique works, etc.
> 
> Has anyone looked into the parallelisms before?  If so, I'd be
> interested in chatting (probably offlist) about your thoughts.
> 
> --Jeremy
> 
> [1] Listen to the interview at http://wamu.org/programs/dr/10/01/06.php#29280
> [2] "The Checklist Manifesto: How to Get Things Right", Atul Gawande,
> Metropolitan Books.



[SC-L] "Checklist Manifesto" applicability to software security

2010-01-07 Thread Jeremy Epstein
Greetings,

I was listening yesterday to an interview [1] on NPR with Dr. Atul
Gawande, author of "Checklist Manifesto" [2].  He describes the
problem that medical procedures (e.g., surgery) tend to have lots of
mistakes, mostly caused because of leaving out important steps.  He
claims that 2/3 of medical - or maybe surgical - errors can be avoided
by use of checklists.  Checklists aren't very popular among doctors,
because they don't like to see themselves as factory workers following
a procedure, because the human body is extremely complex, and because
every patient is unique.

So as I was listening, I was thinking that many of the same things
could be said about software developers and problems with software
security - every piece of software is unique, any non-trivial piece of
software is amazingly complex, developers tend to consider themselves
as artists creating unique works, etc.

Has anyone looked into the parallelisms before?  If so, I'd be
interested in chatting (probably offlist) about your thoughts.

--Jeremy

[1] Listen to the interview at http://wamu.org/programs/dr/10/01/06.php#29280
[2] "The Checklist Manifesto: How to Get Things Right", Atul Gawande,
Metropolitan Books.


[SC-L] 2010 bug hits millions of Germans | World news | The Guardian

2010-01-07 Thread Kenneth Van Wyk
FYI, below is a link to an article with some additional impact details of the 
"2010 bug" that's been cropping up in various places.  Still no light being 
shed on the actual programming error, though.  I think it would make a 
fascinating case study, or at least discussion, here.

http://www.guardian.co.uk/world/2010/jan/06/2010-bug-millions-germans 


Cheers,

Ken

-
Kenneth R. van Wyk
KRvW Associates, LLC
http://www.KRvW.com

(This email is digitally signed with a free x.509 certificate from CAcert. If 
you're unable to verify the signature, try getting their root CA certificate at 
http://www.cacert.org -- for free.)









Re: [SC-L] Ramesh Nagappan Blog : Java EE 6: Web Application Security made simple ! | Core Security Patterns Weblog

2010-01-07 Thread John Steven
All,

With due respect to those who work on ESAPI, Jim included, ESAPI is not the 
only way "to make a secure app even remotely possible." And I believe that 
underneath their own pride in what they've done--some of which is very 
warranted--they understand that. It's hard not to become impassioned in posting.

I've seen plenty of good secure implementations within organizations' own 
security toolkits. I'm not the only one that's noticed: the BSIMM SSF calls out 
three relevant activities to this end:

SDF 1.1 Build/publish security features (*1)
SDF 2.1 Find/publish mature design patterns from the organization (similar URL)
SDF 2.3  Build secure-by-design middleware frameworks/common libraries (similar 
URL)

Calling out three activities within the SSF means that it can't just be "John 
Steven's top client" (whatever that means) that's doing this either. I've 
formally reviewed some of these toolkits and I'd pit them against ESAPI and 
expect favorable results. Plenty of organizations are doing a great job 
building stuff on top of profoundly broken platforms, frameworks, and 
toolkits... and they're following a 'secure SDL' to get there. ESAPI can not be 
said to have adhered to that rigor (*2). Organizations care about this risk 
regardless of the pedigree and experience of those who are building it.

Is the right answer for everyone to drop everything and build their own secure 
toolkit? I don't think so. I like that the OWASP community is taking a whack at 
something open and free to share. These same people have attempted to improve 
Java's security through the community process--and though often correct, 
diligent, friendly, and well-intentioned, their patience has often been tested 
to or beyond the breaking point: those building the platforms and frameworks 
simply aren't listening that well yet. That is very sad.

One thing I've seen a lot of is organizations assessing, testing, hardening, 
documenting, and internally distributing their own versions of popular Java EE 
toolkits (the "secure struts" phenomenon). I've seen some organizations give 
their developers training and write SAST rules to automatically verify correct 
use of such toolkits. I like this idea a hell of a lot as an alternative to an 
ESAPI-like approach. Why? A few reasons: 

1) Popularity - these toolkits appeal to developers: their interfaces have been 
"voted on" by their adopting user population--not conceived and lamented 
principally by security folk. No one forces developers to go from Struts to 
Spring; they do it because it saves them time, makes their app faster, or some 
combination of important factors.

2) Changes App Infrastructure - MVC frameworks, especially, make up the 
scaffolding (hence the name 'Struts') of an application. MVC code often touches 
user input before developers see it and gets the last chance to encode output 
before a channel (user or otherwise) receives it. Focusing on an application's 
scaffolding allows, in some cases, a best chance of touching all input/output 
and true invisibility relative to developer-generated code. Often, its 
configuration is declarative in nature as well--keeping security from 
cluttering up the Java code. Note that this approach is fundamentally different 
from firewalls and some dynamic patching because it's "in the app" (an argument 
made recently by others in the blogosphere). (A minimal filter sketch of this 
scaffolding-level idea follows point 4 below.)

3) Top-to-Bottom Secure by Default - Declarative secure-by-default 
configuration of the hardened toolkit allows for securing those data flows that 
never make it out of the scaffolding into the app. If an organization wrote 
their own toolkit-unaware security API, they'd have to not only assure their 
developers call it each and every place it's needed, but they'd also need to 
crack open the toolkits and make sure THEY call it too. Most of the time, one 
actively wants to avoid even having this visibility, let alone the maintenance 
problem: it's a major headache.

and, most importantly,

4) Less Integration points - Developers are already going to have to integrate 
against a MVC framework, so why force them to integrate against YA 
(yet-another) API? The MVC frameworks already contend with things like session 
management, input filtering, output-encoding, and authentication. Why not 
augment/improve that existing idiom rather than force developers to use it and 
an external security API?
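
To illustrate points 2 and 3 (the sketch promised above), here is a minimal
servlet Filter showing the scaffolding-level, declare-once idea; it is a
rough, hypothetical sketch using only the standard servlet API, not any
particular hardened framework's configuration:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of scaffolding-level enforcement: a Filter declared once (against /*)
// sees user input before any developer code does and can apply a default
// policy without per-call-site integration.
public class BaselineSecurityFilter implements Filter {

    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Example checks only; a real deployment would pull these rules from
        // declarative configuration. "callback" is a made-up parameter name.
        String callback = request.getParameter("callback");
        if (callback != null && !callback.matches("[A-Za-z0-9_.]{1,64}")) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }

        // Secure-by-default response header added for every page.
        response.setHeader("X-Frame-Options", "DENY");

        chain.doFilter(req, res);
    }

    public void destroy() { }
}

Wire it up once in web.xml and every request flows through it whether or not
individual developers remember to call anything--which is the whole point of
attacking the scaffolding.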

The ESAPI team has plenty of responses to the last question... not the least of 
which is "...'cause [framework XXX] sucks." Fair. Out of the box, they often 
do. Fair, [framework team XXX] probably isn't listening to us security guys 
either. 

If you're an ESAPI shop--good. Careful adoption of a security API can help your 
security posture. Please remember to validate that the API (if you sucked in an 
external one rather than writing it) applies to your applications' threat model 
and ticks off all the elements of your security policy. Because, having hooked 
it into their apps, teams are going