Re: [SC-L] Education and security -- another perspective (was ACM Queue - Content)

2004-07-08 Thread Dana Epp
What is wrong with this picture?
I see both of you willing to mandate the teaching of C and yet not
mandate the teaching of any of Ada, Pascal, PL/I etc.
This seems like the teaching of making do.
Hmmm, interesting point. In a particular set of learning objectives 
required to complete a credential (i.e., CompSci, CIS, etc.), what do you 
recommend we sacrifice to fit in all this teaching?

I don't pick C for C's sake. I choose C because ON AVERAGE, most 
students will be exposed to C more than to the languages you suggest, 
especially in the majority of industries hiring students out of university.

However, that said, I don't think the language matters past exposure to 
the industry. A strong foundation of programming skills should be 
language agnostic; loops are loops, recursion is recursion, conditions 
are conditions etc. Learning the syntax of the language to accomplish it 
is secondary. Knowing how a loop breaks down into machine instructions 
is the goal here. Not how to do it in Ada.
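That "loop breaks down into machine instructions" point can be sketched in C. This is an illustrative example of mine, not from the original post, and the assembly in the comments is only a rough, hypothetical rendering of what a typical x86-64 compiler might emit at low optimization:

```c
/* A simple counted loop. The comments sketch, hypothetically, how a
 * typical x86-64 compiler might break it down at low optimization;
 * the underlying concept is identical whether the source language is
 * C, Ada, or Pascal. */
int sum_to(int n)
{
    int total = 0;                      /* mov dword [total], 0        */
    for (int i = 1; i <= n; i++) {      /* cmp i, n / jg exit          */
        total += i;                     /* add dword [total], i        */
    }                                   /* inc i / jmp back to the cmp */
    return total;                       /* mov eax, [total] / ret      */
}
```

sum_to(10) returns 55; the same control flow compiles down to essentially the same compare/add/jump skeleton in any of the languages mentioned, which is exactly why the foundation transfers.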

Think about it in terms of a linguist doing translation at the 
United Nations. They didn't simply go and learn every particular 
language. They are trained in understanding the mechanisms of human 
speech and formal grammar, and they then apply it to the language they 
are learning. In other words, they work from their foundation of 
learning in grammar and then apply the syntax of the particular language 
they are translating. It makes learning new languages much easier, and 
much faster.

So it should be with programming. If a student has a strong foundation 
of learning when it comes to programming, they can adapt to the 
different computer languages they are exposed to. C is a 
perfect language to use to quickly get those concepts across in a 
practical environment in universities. And more importantly, from a 
secure coding objective, you can show what NOT to do.

Dana Epp

Re: [SC-L] Programming languages used for security

2004-07-10 Thread Dana Epp
My, what a week of interesting discussions. Let's end this week on a good 
and light-hearted note.

Admit it. We all know the most secure programming language is Logo anyways.
It's hip to be 'repeat 4 [fd 50 rt 90]'.
Laugh. Or the world laughs at you. Have a good weekend guys.
Crispin Cowan wrote:
David Crocker wrote:
1. Is it appropriate to look for a single general purpose programming
language? Consider the following application areas:
a) Application packages
b) Operating systems, device drivers, network protocol stacks etc.
c) Real-time embedded software
The features you need for these applications are not the same. For 
example, garbage collection is very helpful for (a) but is not 
acceptable in (b) and (c). For (b) you may need to use some low-level 
tricks which you will not need for (a) and probably not for (c).

I agree completely that one language does not fit all. But that does not 
completely obviate the question, just requires some scoping.

2. Do we need programming languages at all? Why not write precise 
specifications and have the system generate the program, thereby 
saving time and
eliminating coding error? [This is not yet feasible for operating 
systems, but it is feasible for many applications, including many 
classes of embedded ...]

The above is the art of programming language design. Programs written in 
high-level languages are *precisely* specifications that result in the 
system generating the program, thereby saving time and eliminating 
coding error. You will find exactly those arguments in the preface to 
the K&R C book.


Dana Epp

Re: [SC-L] How do we improve s/w developer awareness?

2004-11-12 Thread Dana Epp
I think we have to go one step further.
It's nice to know what the attack patterns are. A better thing is to know how to identify them 
during threat modeling, and then apply safeguards to mitigate the risk. i.e., we need a merger of 
the thinking in Exploiting Software and Building Secure Software into a 
single source... where attack and defense can be spoken about together.
We all like to spout out that until you know the threats to which you are 
susceptible, you cannot build secure systems. The reality is, unless you 
know how to MITIGATE the threats... simply knowing they exist doesn't do much 
to protect the customer.
Gary McGraw wrote:
One of the reasons that Greg Hoglund and I wrote Exploiting Software was
to gain a basic understanding of what we call attack patterns.  The
idea is to abstract away from platform and language considerations (at
least some), and thus elevate the level of attack discussion.
We identify and discuss 48 attack patterns in Exploiting Software.  Each
of them has a handful of associated examples from real exploits.  I will
paste in the complete list below.  As you will see, we provided a start,
but there is plenty of work here remaining to be done.
Perhaps by talking about patterns of attack we can improve the signal to
noise ratio in the exploit discussion department.
Gary McGraw, Ph.D.
CTO, Cigital
Make the Client Invisible
Target Programs That Write to Privileged OS Resources 
Use a User-Supplied Configuration File to Run Commands That Elevate
Make Use of Configuration File Search Paths 
Direct Access to Executable Files 
Embedding Scripts within Scripts 
Leverage Executable Code in Nonexecutable Files 
Argument Injection 
Command Delimiters 
Multiple Parsers and Double Escapes 
User-Supplied Variable Passed to File System Calls 
Postfix NULL Terminator 
Postfix, Null Terminate, and Backslash 
Relative Path Traversal 
Client-Controlled Environment Variables 
User-Supplied Global Variables (DEBUG=1, PHP Globals, and So Forth) 
Session ID, Resource ID, and Blind Trust
Analog In-Band Switching Signals (aka Blue Boxing) 
Attack Pattern Fragment: Manipulating Terminal Devices 
Simple Script Injection 
Embedding Script in Nonscript Elements 
XSS in HTTP Headers 
HTTP Query Strings 
User-Controlled Filename 
Passing Local Filenames to Functions That Expect a URL 
Meta-characters in E-mail Header
File System Function Injection, Content Based
Client-side Injection, Buffer Overflow
Cause Web Server Misclassification
Alternate Encoding the Leading Ghost Characters
Using Slashes in Alternate Encoding
Using Escaped Slashes in Alternate Encoding 
Unicode Encoding 
UTF-8 Encoding 
URL Encoding 
Alternative IP Addresses 
Slashes and URL Encoding Combined 
Web Logs 
Overflow Binary Resource File 
Overflow Variables and Tags 
Overflow Symbolic Links 
MIME Conversion 
HTTP Cookies 
Filter Failure through Buffer Overflow 
Buffer Overflow with Environment Variables 
Buffer Overflow in an API Call 
Buffer Overflow in Local Command-Line Utilities 
Parameter Expansion 
String Format Overflow in syslog() 

This electronic message transmission contains information that may be
confidential or privileged.  The information contained herein is intended
solely for the recipient and use by any other party is not authorized.  If
you are not the intended recipient (or otherwise authorized to receive this
message by the intended recipient), any disclosure, copying, distribution or
use of the contents of the information is prohibited.  If you have received
this electronic message transmission in error, please contact the sender by
reply email and delete all copies of this message.  Cigital, Inc. accepts no
responsibility for any loss or damage resulting directly or indirectly from
the use of this email or its contents.
Thank You.

Dana Epp

[no subject]

2004-12-02 Thread Dana Epp
Subject: Re: [SC-L] How do we improve s/w developer awareness?
Date: Thu, 2 Dec 2004 12:52:35 -0800

I think we also have to realize that bridge building has had centuries of 
time to evolve and learn from its mistakes. Secure software engineering as 
a discipline is still in its infancy. I would love to have seen the quality 
of bridges in that discipline's first 50 years of development.

That's of course no excuse for the current state of software development. 
But comparisons like this are like statistics... 86.12345% of them are made 
up, or have no sane correlation.

- Original Message - 
Sent: Thursday, December 02, 2004 8:25 AM
Subject: Re: [SC-L] How do we improve s/w developer awareness? 

I have to say I find your comparison between bridge engineers and software
 engineers rather troubling.

 In response to your question:

  'Would you accept it was too hard to do a stress analysis from the
 engineer designing a bridge?'

 I think, regrettably, we probably would do these days.

 Remember that little incident in 2000 when the London Millennium Bridge 
 closed immediately after opening due to excessive wobbling when people
 walked across it? I can't guarantee that my recollection is accurate, but
 I'm sure they were trying to put this down to that software classic, a
 'design feature'.

 Seems that far from Software Engineers taking the bridge engineers
 approach, we may be seeing the exact reverse happening. :-)

 Graham Coles.

RE: [SC-L] Bugs and flaws

2006-02-03 Thread Dana Epp

I think I would word that differently. The design defect was when 
Microsoft decided to allow metadata to call GDI functions.

Around 1990 when this was introduced the threat profile was entirely 
different; the operating system could trust the metadata. Well, actually 
I would argue that it couldn't, but no one knew any better yet. At the 
time SetAbortProc() was an important function to allow for print 
cancellation in the co-operative multitasking environment that was 
Windows 3.0.

To be clear, IE was NOT DIRECTLY vulnerable to the WMF attack vector 
everyone likes to use as a test case for this discussion. IE actually 
refuses to process any metafile containing META_ESCAPE records (which 
SetAbortProc relies on). Hence it's not possible to exploit the 
vulnerability by simply calling a WMF image via HTML. So how is IE 
vulnerable then? It's not, actually. The attack vector uses IE as a 
conduit to call out to secondary library code that will process it. In 
the case of the exploits that hit the Net, attackers used an IFRAME hack 
to call out to the shell to process it. The shell would look up the 
handler for WMF, which was the Windows Picture Viewer that did the 
processing in shimgvw.dll. When the dll processed the WMF, it would 
convert it to a printable EMF format, and bam... we ran into problems.

With the design defect being the fact that metadata can call arbitrary 
GDI code, the implementation flaw is the fact that applications like IE 
rely so heavily on calling out to secondary libraries that just can't be 
trusted. Even if IE has had a strong code review, it is extremely 
probable that most of the secondary library code has not had the same 
audit scrutiny. This is a weakness of all applications, not just IE. 
When you call out to untrusted code that you don't control, you put the 
application at risk. No different than any other operating system. The 
only problem is that Windows is riddled with these potential holes 
because it shares so much of the same codebase. And in the past the 
teams rarely talked to each other to figure this out.

Code reuse is one thing, but some of the components in Windows are 
carry-over from 15 years ago, and will continue to put us at risk due to 
the implementation flaws that haven't yet been found. But with such a 
huge body of master sources to begin with, it's not something that will 
be fixed overnight.

Dana Epp [Microsoft Security MVP]

From: [EMAIL PROTECTED] on behalf of Crispin Cowan
Sent: Fri 2/3/2006 12:12 PM
To: Gary McGraw
Cc: Kenneth R. van Wyk; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

Gary McGraw wrote:
 To cycle this all back around to the original posting, let's talk
 about the WMF flaw in particular. Do we believe that the best way for
 Microsoft to find similar design problems is to do code review? Or
 should they use a higher level approach? Were they correct in saying
 (officially) that flaws such as WMF are hard to anticipate?

I have heard some very insightful security researchers from Microsoft
pushing an abstract notion of "attack surface", which is the amount of
code/data/API/whatever that is exposed to the attacker. To design for
security, among other things, reduce your attack surface.

The WMF design defect seems to be that IE has too large of an attack
surface. There are way too many ways for unauthenticated remote web
servers to induce the client to run way too much code with parameters
provided by the attacker. The implementation flaw is that the WMF API in
particular is vulnerable to malicious content.

None of which strikes me as surprising, but maybe that's just me :)

Crispin
--
Crispin Cowan, Director of Software Engineering, Novell

Secure Coding mailing list (SC-L)
List information, subscriptions, etc -
List charter available at -

RE: [SC-L] ddj: beyond the badnessometer

2006-07-13 Thread Dana Epp
Although pentesting isn't perfect, I think in the right scope it has the
potential of playing a vital role in the development lifecycle of an
application.

Building known attack patterns into a library which can be run against a
codebase has some merit, as long as you understand what the resulting
expectations will be. As an example, I would consider automated
vulnerability assessment tools built to do input validation fuzzing to
be part of pentesting. And most pentesters out there use said tools for
that very reason.
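That idea — hurling random input at a validation routine as part of an attack library — can be sketched in C. Both `parse_pair` and `fuzz` are hypothetical stand-ins of mine, not tools referenced in the thread:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical parser under test: expects "KEY=VALUE" with a short key.
 * Returns 0 on success, -1 on rejected input; it must never read or
 * write out of bounds no matter what bytes it is handed. */
int parse_pair(const char *buf, size_t len, char *key, size_t keysz)
{
    const char *eq = memchr(buf, '=', len);
    if (eq == NULL || eq == buf)
        return -1;                 /* no '=' or empty key: reject */
    size_t klen = (size_t)(eq - buf);
    if (klen >= keysz)
        return -1;                 /* key too long: reject, don't truncate */
    memcpy(key, buf, klen);
    key[klen] = '\0';
    return 0;
}

/* Dumb random fuzz driver: feed random byte strings to the parser and
 * verify it only ever answers 0 or -1 (i.e., it fails closed).
 * Returns 0 if the parser behaved for every iteration. */
int fuzz(unsigned seed, int iterations)
{
    srand(seed);
    for (int i = 0; i < iterations; i++) {
        char input[64], key[8];
        size_t len = (size_t)(rand() % (int)sizeof input);
        for (size_t j = 0; j < len; j++)
            input[j] = (char)(rand() % 256);
        if (parse_pair(input, len, key, sizeof key) < -1)
            return -1;             /* parser misbehaved */
    }
    return 0;
}
```

Running `fuzz(1, 100000)` exercises the parser with arbitrary bytes; a crash or out-of-bounds write here is exactly the brain-dead bug class such automated tooling is good at surfacing early.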

I agree that pentesting at the START of a project is futile. Evaluating
the risks to the software by understanding its architecture, inputs and
flows goes much deeper and allows you to better find design flaws
instead of implementation bugs. But I am not so sure I would dismiss the
act of pentesting because of the badness-ometer factor. If we did, we
would also be dismissing things like static code analysis tools, as they
can show similarly noisy results.

On a different tangent to this thread though, I don't think that the
BEST use of pentesting is for determining how secure your code is in the
first place. It is much more suited to allow you to stress test failure
code paths for different implementation configurations. No matter how
safe you write your code, pentesting can ferret out different scenarios
that come from deployment configuration problems. That is, if the
pentest tools and the user(s) of said tools know how to run through this
properly. As you mentioned in your article, too many people pass
themselves off as pentesting experts when they aren't. Just because they
CAN run Nessus doesn't mean they are good pentesters.

It's about using the right tool for the right job. As pentest tools
mature I think we will be able to use the growing attack libraries to
test against known patterns to eliminate the brain-dead security bugs,
while allowing the tools to go deeper and ferret out problems as they
reach more code coverage. The more we can automate that process and
make it part of our daily tests... the quicker it can expose 'known
problems' in a code base.

Can anyone recommend a tool that CAN tell you how good your security is?
I thought not.

Here I go quoting Spaf again.

When the code is incorrect, you can't really talk about security. When
the code is faulty, it cannot be safe.

Problem is, no tool in the world is going to show green blinky lights to
tell you the code is safe. Human heuristics come into play here, and we
have to leverage what assets we have, both manual and automatic, to find
the faulty code and eliminate it. And pentesting is just another one of
those tools in the arsenal to help.

Dana Epp 
[Microsoft Security MVP]

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of Gary McGraw
Sent: Thursday, July 13, 2006 8:05 AM
To: Nash
Cc: Secure Coding Mailing List
Subject: RE: [SC-L] ddj: beyond the badnessometer

Excellent post nash.  Thanks!

I agree with you for the most part.  You have a view of pen testing that
is quite sophisticated (especially compared to the usual drivel).  I
agree with you so much that I included pen testing as the third most
important touchpoint in my new book Software Security
It is the subject of chapter 6.  All the code review and architectural
risk analysis in the world can still be completely sidestepped by poor
decisions regarding the fielded software.  Pen testing is ideal for
looking into that.

But there are a few things I want to reiterate:
1) pen testing is a bad way to *start* working on software security.
You'll get much better traction with code review and
architectural risk assessment.  {Of course, what nash says about the
power of a live sploit is true, and that kind of momentum creation may
be called for in a completely new situation where biz execs need basic
awareness.}
2) pen testing can't tell you anything about how good your security is,
only how bad it is.
3) never use the results of a pen test as a punch list to attain
security.




Re: [SC-L] bumper sticker slogan for secure software

2006-07-18 Thread Dana Epp
Or perhaps less arrogance in believing it won't sink.

Absolute security is a myth, as is designing absolutely secure software.
It is a lofty goal, but one that just isn't achievable as threats change
and new attack patterns are found. Designing secure software is about
balancing software dependability against the risks to that software,
mitigating them to acceptable tolerance levels, while at the same time
ensuring the software still accomplishes its original goal... to solve
some problem for the user.
On my office door is a bumper sticker I made. It simply says:


10 points to the first person to explain what that means. 

Dana Epp 
[Microsoft Security MVP]

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of SC-L Subscriber Dave
Sent: Tuesday, July 18, 2006 7:53 AM
Subject: [SC-L] bumper sticker slogan for secure software

Paolo Perego [mailto:[EMAIL PROTECTED] writes:

  Software is like Titanic, people claim it was unsinkable. Securing it
  is providing it power steering

But power steering wouldn't have saved it.  By the time the iceberg was
spotted, there was not enough time to turn that large a boat.  Perhaps
radar, but that doesn't make a very good analogy.  Maybe a thicker
tougher hull and automatic compartment doors?



Re: [SC-L] bumper sticker slogan for secure software

2006-07-20 Thread Dana Epp
 but none of this changes the fact that it IS possible to write
completely secure code.
 -- mic

And it IS possible that a man will walk on Mars someday. But it's not
practical or realistic in the society we live in today. I'm sorry mic,
but I have to disagree with you here.

It is EXTREMELY difficult to have code be 100% correct if an application
has any level of real use or complexity. There will be security defects.
The weakest link here is the human factor, and people make mistakes.
More importantly, threats are constantly evolving and what you may
consider completely secure today may not be tomorrow when a new attack
vector is recognized that may attack your software. And unless you wrote
every single line of code yourself without calling out to ANY libraries,
you are relying on the security of other libraries or components that
may NOT have had the same engineering discipline that you apply to your
own code base. 

Ross Anderson once said that secure software engineering is about
building systems to remain dependable in the face of malice, error, or
mischance. I think he has something there. If we build systems to
maintain confidentiality, integrity and availability, we have the
ability to fail gracefully in a manner to recover from unknown or
changing problems in our software without being detrimental to the user,
or their data.

I don't think we should ever stop striving to reach secure coding
nirvana. But I also understand that in the real world we are still in
our infancy when it comes to secure software as a discipline, and we
still have much to learn before we will reach it. 

Dana Epp
[Microsoft Security MVP]


Re: [SC-L] bumper sticker slogan for secure software

2006-07-21 Thread Dana Epp
Actually, Brian Shea got the points for emailing me that he knew it was
the system error Access Denied.

An extra 10 points goes to Andrew van der Stock for his explanation:

apparently the term originates from radio, where 5x5 means good
reception and good signal strength (in that order). So we're:

- no reception (0)
- good signal strength (5)

ie, we're doing ok at getting our message out, but people aren't  
listening yet. 

That cracked me up. So fitting for this forum.

Dana Epp 
[Microsoft Security MVP]

-Original Message-
From: mikeiscool [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 20, 2006 3:25 PM
To: Wall, Kevin
Cc: Dana Epp;
Subject: Re: [SC-L] bumper sticker slogan for secure software

 BTW, does anyone besides me think that it's time to put this thread to


I do.

But i'm still waiting for my points from dana ...


-- mic


Re: [SC-L] Bumper sticker definition of secure software

2006-07-24 Thread Dana Epp
But secure software is not a technology problem, it's a business one.
Focused on people.

If smartcards were so great, why isn't every single computer in the
world equipped with a reader? There will always be technology safeguards
we can put in place to mitigate particular problems. But technology is
not a panacea here. 

There will always be trade-offs that will trump secure design and
deployment of safeguards. It's not about putting ABSOLUTE security in...
It's about putting just enough security in to mitigate risks to
acceptable levels to the business scenario at hand, and at a cost that
is justifiable. Smartcard readers aren't deployed everywhere because they
simply are too costly to deploy against particular PERCEIVED threats
that may or may not be part of an application's threat profile.

I agree that we can significantly lessen the technology integration
problem with computers. We are, after all, supposed to be competent
developers that can leverage the IT infrastructure to our bidding. The
problem is when we keep our head in the technology bubble without
thinking about the business impacts and costs, wasting resources in the
wrong areas.

It is no different than network security professionals that deploy
$30,000 firewalls to protect digital assets worth less than the computer
they are on. (I once saw a huge Checkpoint firewall protecting an MP3
server. Talk about waste.) Those guys should be shot for ever making
that recommendation. As should secure software engineers who think they
can solve all problems with technology without considering all risks and
impacts to the business.

Dana Epp 
[Microsoft Security MVP]

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of mikeiscool
Sent: Sunday, July 23, 2006 3:42 PM
To: Crispin Cowan
Cc: Secure Coding Mailing List
Subject: Re: [SC-L] Bumper sticker definition of secure software

 As a result, really secure systems tend to require lots of user 
 training and are a hassle to use because they require permission all
the time.

No I disagree still. Consider a smart card. Far easier to use than the
silly bank logins that are available these days. Far easier than even
bothering to check if the address bar is yellow, due to FF, or some
other useless addon.

You just plug it in, and away you go, pretty much.

And requiring user permission does not make a system harder to use (per
se). It can be implemented well, and implemented badly.

 Imagine if every door in your house was spring loaded and closed 
 itself after you went through. And locked itself. And you had to use a

 key to open it each time. And each door had a different key. That 
 would be really secure, but it would also not be very convenient.

We're talking computers here. Technology lets you automate things.


-- mic

Re: [SC-L] Unclassified NSA document on .NET 2.0 Framework Security

2008-11-26 Thread Dana Epp
With all due respect, I think this is where the process of secure coding
fails. I think it stems from poor education, but it's compounded by an
arrogant cop-out that developers have no power. Your view is not alone. I
hear it a lot. And I think it's an easy out.

I agree with you that buy in for designing secure code must come from the
top down. It has to start at the senior management level and work its way
down the line. However, there are certain day-to-day actions that a
developer completely has control over.

With tight deadlines and a lack of resources it's easy to sacrifice good
principles and practices to get code out. No one denies that. But there are
certain things developers CAN and SHOULD be doing. They should be thinking
more defensively about their code. If you consider the criticality of many
design flaws of today, a lot of it can be controlled by better understanding
how the data traverses the systems, and more importantly how to address it
as it crosses trust boundaries. By understanding this, it's easier to see the
entry points that matter most to an adversary, and it allows the
developer to decide how to deal with the code as it fails. This has very
little to do with management. This is the difference between strategic and
tactical decisions on project development. A developer doesn't need to ask
his boss if he can or should use exception handling correctly. To validate
all untrusted user input. To check return codes when calling functions.
Sadly... in this day and age... most developers don't even do that
correctly. And that's a simple example of this entire problem.
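That simple example — checking return codes instead of assuming the happy path — looks like this in C (a minimal sketch of mine; `count_bytes` is a hypothetical helper, not code from the thread):

```c
#include <stdio.h>

/* Count the bytes in a file, checking every return code instead of
 * assuming the happy path. Returns the size, or -1 on any failure. */
long count_bytes(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;                     /* don't assume the open succeeded */
    if (fseek(f, 0L, SEEK_END) != 0) { /* fseek can fail too */
        fclose(f);
        return -1;
    }
    long size = ftell(f);              /* -1 on error, propagated as-is */
    fclose(f);
    return size;
}
```

No boss's approval is needed to write it this way; the defensive version is the same number of ideas, just with the failure paths made explicit instead of ignored.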

Until we stop blaming others and take responsibility for the code we write,
things won't change. As Gary mentioned... there is an attitude from many
developers that they can abdicate responsibility for what they are doing. It's
hard to get them to even SAY security. Never mind actually making an effort
to do something about it.

I appreciate your opinion that you need to approach and work with the
decision makers. You are absolutely correct. That's the strategic component of
writing secure code. However the tactical approach of actually DOING IT falls on
developers. It shouldn't BE a special process. It should be part of their
day-to-day thinking, attitude and function in their field of work.

Guess where this all starts? By educating the developer. Which is why the
onus of tactically doing it falls squarely on them.

Dana Epp
Microsoft Security MVP

On Tue, Nov 25, 2008 at 9:01 AM, Stephen Craig Evans wrote:


 Developers have no power. You should be talking to the decision makers.

 As an example, to instill the importance of software security, I talk
 to decision makers: project managers, architects, CTOs (admittedly,
 this is a blurred line - lots of folks call themselves architects). If
 I go to talk about software security to developers, I know from
 experience that I am probably wasting my time. Even if they do care,
 they have no effect overall.

 Your target and blame is wrong; that's all that I am saying.


 On Wed, Nov 26, 2008 at 12:48 AM, Gunnar Peterson wrote:
  Sorry I didn't realize developers is an offensive ivory tower in other
  parts of the world; in my world it's a compliment.
  On Nov 25, 2008, at 10:30 AM, Stephen Craig Evans wrote:
  maybe the problem with least privilege is that it requires that
  IMHO, your US/UK ivory towers don't exist in other parts of the world.
  Developers have no say in what they do. Nor, do they care about
  software security and why should they care?
  So, at least, change your nomenclature and not say developers. It
  offends me because you are putting the onus of knowing about software
  security on the wrong people.
  On Tue, Nov 25, 2008 at 10:18 PM, Gunnar Peterson
  maybe the problem with least privilege is that it requires that
  1. define the entire universe of subjects and objects
  2. define all possible access rights
  3. define all possible relationships
  4. apply all settings
  5. figure out how to keep 1-4 in synch all the time
  do all of this before you start writing code and oh and there are
  basically no tools that smooth the adoption of the above.
  i don't think us software security people are helping anybody out in
  2008 by doing ritual incantations of a paper from the mid 70s that may
  or may not apply to modern computing and anyhow is riddled with ideas
  that have never been implemented in any large scale systems
  compare these two statements
  Statement 1. Saltzer and Schroeder:
  f) Least privilege: Every program and every user of the system should
  operate using the least set of privileges necessary to complete the
  job. Primarily, this principle limits the damage that can result from
  an accident or error. It also reduces the number of potential
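As a concrete companion to principle (f), one common POSIX approximation of least privilege is to perform the privileged operation early and then permanently drop uid/gid before touching untrusted input. A minimal sketch (`drop_privileges` is my illustration, not code from the thread):

```c
#include <unistd.h>
#include <sys/types.h>

/* Permanently drop to an unprivileged uid/gid. Returns 0 on success,
 * -1 if the drop failed -- in which case the caller should abort
 * rather than keep running with extra privilege. */
int drop_privileges(uid_t uid, gid_t gid)
{
    if (setgid(gid) != 0)   /* group first: after setuid() we may no   */
        return -1;          /* longer have the right to change groups  */
    if (setuid(uid) != 0)
        return -1;
    /* Verify the drop actually took effect. */
    if (getuid() != uid || getgid() != gid)
        return -1;
    return 0;
}
```

The ordering matters: an unprivileged process can no longer call setgid(), so the group must be dropped before the user id. This is a far smaller commitment than enumerating every subject, object and right up front, which is perhaps why it is the form of least privilege that actually ships.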

Re: [SC-L] any one a CSSLP is it worth it?

2010-04-14 Thread Dana Epp
Not sure that would work either though.

Many secdev people are introverts. In their shell, they won't debate
the validity of a position, including a wrong answer. Translate that into
how they respond in the exam. It's one thing to say there is no correct
answer, but the way the questions are set at ISC2, it's 'what is the
BEST answer out of this list'. By the end of the 6 hours your eyes are
glossed over because you actually had to think. But it's still better than
the 1-2 hr absolute-answer exams from many orgs.

I think where Gary nailed it on the head is that you have to be a good
developer BEFORE you can be good at secdev. Poorly written code can
not be trusted. It cannot be safe. The rest is moot.

I have never been one to trust a piece of paper. Education comes from
doing. Book knowledge cannot be the only weapon in a secdev's
experience portfolio. He needs war wounds. Real scars of experience.
He needs to learn from his own experience and apply it as the field
matures and grows. I see far too many people who think that because they
opened Ken Van Wyk's, Michael Howard's or Gary McGraw's books,
they now get secdev. Without actually applying that knowledge
transfer. Review their code, and it's far from absolute. Especially in
failure code paths. Don't get me wrong... it's essential reading. But
it's not enough. Doing is.

In the immortal words of Yoda... 'Do or do not. There is no try.'

I wonder if a bigger problem is that corps are relying on these
certifications to weed out the bad apples? Does NOT having the CSSLP mean
the candidate sucks at secdev? Or the reverse: can anyone who passed
the CSSLP be trusted to get it right all the time? Absolute security
is a fallacy. As is perfect code. With enough money and motive,
anything can be breached. A piece of paper won't stop that. Nor will it
fix that crappy piece of code that I didn't properly threat model 15
years ago and that is still in use today.

Dana Epp
Microsoft Security MVP

On Wed, Apr 14, 2010 at 8:24 AM, Wall, Kevin wrote:

 Gary McGraw wrote...

 Way back on May 9, 2007 I wrote my thoughts about
 certifications like these down.  The article, called
 'Certifiable', was published by Dark Reading:

 I just reread your Dark Reading post and I must say I agree with it
 almost 100%. The only part where I disagree with it is where you wrote:

        The multiple choice test itself is one of the problems. I
        have discussed the idea of using multiple choice to
        discriminate knowledgeable developers from clueless
        developers (like the SANS test does) with many professors
        of computer science. Not one of them thought it was possible.

 I do think it is possible to separate the clueful from the clueless
 using multiple choice if you cheat. Here's how you do it: you write
 up your question and then list 4 or 5 INCORRECT answers and NO CORRECT
 answer.

 The clueless ones are the ones who just answer the question with one of
 the possible choices. The clueful ones are the ones who come up and argue
 with you that there is no correct answer listed. ;-)

 Kevin W. Wall           Qwest Information Technology, Inc.    Phone: 614.215.4788
 It is practically impossible to teach good programming to students
  that have had a prior exposure to BASIC: as potential programmers
  they are mentally mutilated beyond hope of regeneration
    - Edsger Dijkstra, How do we tell truths that matter?

 This communication is the property of Qwest and may contain confidential or
 privileged information. Unauthorized use of this communication is strictly
 prohibited and may be unlawful.  If you have received this communication
 in error, please immediately notify the sender by reply e-mail and destroy
 all copies of the communication and any attachments.

 Secure Coding mailing list (SC-L)
 List information, subscriptions, etc -
 List charter available at -
 SC-L is hosted and moderated by KRvW Associates, LLC (
 as a free, non-commercial service to the software security community.
 Follow KRvW Associates on Twitter at:
