Re: [SC-L] any one a CSSLP is it worth it?

2010-04-14 Thread Dana Epp
Not sure that would work either though.

Many secdev people are introverts. In their shell, they won't debate
the validity of a position, including a wrong answer; they'll just turn
it into an answer on the exam. It's one thing to say "there is no
correct answer", but the way the questions are set at ISC2, it's "what
is the BEST answer out of this list". By the end of the 6 hours your
eyes are glazed over because you actually had to think. But it's still
better than the 1-2 hour absolute-answer exams from many orgs.

I think where Gary hit the nail on the head is that you have to be a
good developer BEFORE you can be good at secdev. Poorly written code
cannot be trusted. It cannot be safe. The rest is moot.

I have never been one to trust a piece of paper. Education comes from
doing. Book knowledge cannot be the only weapon in a secdev's
experience portfolio. He needs war wounds. Real scars of experience.
He needs to learn from his own experience and apply it as the field
matures and grows. I see far too many people who think that because
they opened Ken van Wyk's, Michael Howard's or Gary McGraw's books
they now get secdev, without actually applying that knowledge
transfer. Review their code and it's far from solid, especially in
failure code paths. Don't get me wrong... it's essential reading. But
it's not enough. Doing is.

In the immortal words of Yoda... "Do or do not. There is no try."

I wonder if a bigger problem is that corporations are relying on these
certifications to weed out the bad apples. Does NOT having the CSSLP
mean the candidate sucks at secdev? Or the reverse: can anyone who
passed the CSSLP be trusted to get it right all the time? Absolute
security is a fallacy. As is perfect code. With enough money and
motive, anything can be breached. A piece of paper won't stop that.
Nor will it fix that crappy piece of code that I didn't properly
threat model 15 years ago and that is still in use today.

-- 
Regards,
Dana Epp
Microsoft Security MVP

On Wed, Apr 14, 2010 at 8:24 AM, Wall, Kevin  wrote:
>
> Gary McGraw wrote...
>
>> Way back on May 9, 2007 I wrote my thoughts about
>> certifications like these down.  The article, called
>> "Certifiable" was published by darkreading:
>>
>> http://www.darkreading.com/security/app-security/showArticle.jhtml?articleID=208803630
>
> I just reread your Dark Reading post and I must say I agree with it
> almost 100%. The only part where I disagree with it is where you wrote:
>
>        The multiple choice test itself is one of the problems. I
>        have discussed the idea of using multiple choice to
>        discriminate knowledgeable developers from clueless
>        developers (like the SANS test does) with many professors
>        of computer science. Not one of them thought it was possible.
>
> I do think it is possible to separate the clueful from the clueless
> using multiple choice if you "cheat". Here's how you do it. You write
> up your question and then list 4 or 5 INCORRECT answers and NO CORRECT
> answers.
>
> The clueless ones are the ones who just answer the question with one of
> the possible choices. The clueful ones are the ones who come up and argue
> with you that there is no correct answer listed. ;-)
>
> -kevin
> ---
> Kevin W. Wall           Qwest Information Technology, Inc.
> kevin.w...@qwest.com    Phone: 614.215.4788
> "It is practically impossible to teach good programming to students
>  that have had a prior exposure to BASIC: as potential programmers
>  they are mentally mutilated beyond hope of regeneration"
>    - Edsger Dijkstra, How do we tell truths that matter?
>      http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html
>
> ___
> Secure Coding mailing list (SC-L) SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php
> SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
> as a free, non-commercial service to the software security community.
> Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
> ___
>

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/m

Re: [SC-L] Change of position

2004-04-01 Thread Dana Epp
God I hate April 1st.

I had to do a double take on this for a second when I saw that it was 
you posting such filth, Gary.

Gary McGraw wrote:

Hi all,

I have done lots of soul searching lately and have come to the
conclusion that trying to make software secure is not worth the effort.
I think instead we should concentrate more effort on protection
technologies such as advanced stateful firewalls, intrusion detection
mechanisms, host-based behavior control, and above all policy.  We
simply can't make software work effectively in a cost effective manner.
I hope all of you will agree.  

My plan is to create a new mailing list (hope Ken lets this one by)
called nsbsc-l [network-security-beats-secure-coding-list].  Look for
more information about that from me soon.
gem

Gary McGraw, Ph.D.
CTO, Cigital
http://www.cigital.com


--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]



Re: [SC-L] Interesting article on the adoption of Software Security

2004-06-11 Thread Dana Epp
Ok, let's turn the tables a bit here. We talked about this back last 
December when I said that you need to use the right tool for the right 
job, and to quit beating on C.

For those of us who write kernel mode / ring0 code, what language are 
you suggesting we write in? Name a good typesafe language that you have 
PRACTICALLY seen used to write kernel mode code, especially on the 
Windows and Linux platforms. I am not trying to fuel the argument over 
which language is better; it comes down to the right tool for the right 
job. I know back in December ljknews suggested PL/I and Ada, but who has 
actually seen production kernel code on either Windows or Linux using 
them?

Let's face it: you aren't normally going to see Java or C# in kernel code 
(yes, I am aware of JavaOS and of some guys at Microsoft wanting to write 
everything in their kernel via managed code), but it's just not going to 
happen in practice. C and ASM are the right tools in this area of code.

I said this back in December and think it's worth repeating. The C 
language's downfall is also its greatest strength. It is a double-edged 
sword that really SHOULD be mastered by those who need it, but that many 
treat like a child's $5 plastic toy... wielded by the inexperienced 
who don't know any better. The reality is that instead of avoiding it, we 
should include the proper teaching to use it safely and correctly. I 
think that if we try to sidestep the issue, we will end up using the 
wrong tool at the wrong time. We shouldn't fear using languages like C 
and C++; we just need to know their place, know their fallibilities and 
deal with them.
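As a small illustration of what "teaching people to use it safely" looks 
like in practice, here is a minimal sketch. The function names 
(copy_name_unsafe, copy_name_bounded) are mine, not from any particular 
codebase; the point is only the contrast between the toy usage and the 
mastered usage of the same language.

#include <stdio.h>
#include <string.h>

/* The $5-plastic-toy version: no bounds check, a classic overflow
   waiting to happen if src is longer than dest. */
void copy_name_unsafe(char *dest, const char *src)
{
    strcpy(dest, src);              /* writes past dest if src is too long */
}

/* The mastered version: the caller states the buffer size and the copy
   honours it, failing the call instead of corrupting memory. */
int copy_name_bounded(char *dest, size_t dest_size, const char *src)
{
    if (dest == NULL || src == NULL || dest_size == 0)
        return -1;
    if (strlen(src) >= dest_size)
        return -1;                  /* refuse input that will not fit */
    memcpy(dest, src, strlen(src) + 1);
    return 0;
}

int main(void)
{
    char buf[16];
    if (copy_name_bounded(buf, sizeof buf, "ring0") == 0)
        printf("copied: %s\n", buf);
    return 0;
}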

Crispin is right; new code SHOULD be written in a typesafe language 
unless there is a very strong reason to do otherwise. The reality is 
that many developers don't know when that right time is, and the result 
is poor choices in tools, languages and structure. I'd love for someone 
to show me... no... convince me of a typesafe language that can be used 
in such a place. I have yet to see it used for production kernel code on 
a regular basis.

Now what's interesting is that some people are starting to get this. If 
you look at some of the latest DDK builds coming out of Microsoft you 
now see advancements in tools to handle this. Tools like PREfast can do 
a lot to analyze code, and the new Static Driver Verifier goes to the 
next level, tracing code execution paths and checking for faults in 
drivers traditionally written in C. They further extend that with safer 
string functions (strsafe.h) and deeper inspection of code, as well 
as lots of training to bring people up to speed in secure programming 
through some of their MSDN webcasts. Now, I am NOT saying Microsoft is 
the company I would look to as a model in this area, but I am seeing 
the effort there. The trick is actually educating the developers to use 
the tools, and to use them properly. (RATS and StackGuard were some good 
ones Crispin pointed out.)
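For anyone who hasn't seen those safer string functions, here is a rough 
user-mode sketch assuming the strsafe.h header that ships with the newer 
SDK/DDK builds; SetDriverName and its buffer are made-up names for 
illustration. The win over strcpy is simply that the destination size is 
part of the call and failure comes back as an HRESULT instead of a 
trashed stack.

#include <windows.h>
#include <strsafe.h>

/* Illustrative only: copy an untrusted string into a fixed buffer using
   the size-aware strsafe call rather than strcpy. */
BOOL SetDriverName(const TCHAR *input)
{
    TCHAR szDriverName[64];
    HRESULT hr = StringCchCopy(szDriverName,
                               sizeof(szDriverName) / sizeof(szDriverName[0]),
                               input);
    if (FAILED(hr))
    {
        /* STRSAFE_E_INSUFFICIENT_BUFFER and friends land here; the
           operation fails cleanly instead of overflowing. */
        return FALSE;
    }
    /* ... szDriverName is now NUL-terminated and within bounds ... */
    return TRUE;
}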

It's the right tool for the right job. And although you can pound a 
square peg through a round hole if you beat it hard enough... that 
doesn't mean it's the right thing to do. Nor is it right to assume you 
can use typesafe languages as the panacea for secure coding.

--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]



Re: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-06 Thread Dana Epp
I'd be interested to hear what people think of the two approaches (separate
security courses vs. spreading security all over the curricula).
Regards.
Fernando.
Well, I have been asked to teach a new fourth-year course at the British 
Columbia Institute of Technology (BCIT) this fall on Secure Programming 
(COMP4476). I have no problem sharing my course outline and breakdown, 
since a lot of it is adapted from the approaches many other structured 
secure programming courses and texts are taking. The idea is that 
students need to build a strong foundation of learning that they can 
apply in whichever discipline they follow in the future. This shouldn't 
be a first-year course, but I think it's a bit late as a fourth-year 
course. You will note that the course I am teaching is somewhat language 
agnostic, and even platform agnostic, to ensure that the foundation isn't 
tainted with 'fad-of-the-day' techniques, technologies and tools. (The 
exception is Web applications, where the jury is still out.)

Course Breakdown

1. Essentials of Application Security
* Types of attacks (hackers, DoS, Viruses, Trojans, Worms, 
organizational attacks etc)
* Consequences of Poor Security (Data theft, lost productivity, damaged 
reputation, lost consumer confidence, lost revenues)
* Challenges when Implementing Security (Security vs Usability, 
Attackers vs Defenders, The misinformation about the security cost)

2. Secure Application Development Practices
* Implementing security at every stage of the development process
* Designing clean error code paths, and failing securely
* Planning for failure through results checking
* Code review
3. Threat Modeling
* Attack Trees
* STRIDE Threat Modeling
* DREAD risk analysis
4. Using Security Technologies
* Encryption
* Hashing
* Digital signatures
* Digital certificates
* Secure communications (Using IPSec/SSL)
* Authentication
* Authorization
5. Detecting and fixing Memory and Arithmetic Issues (see the sketch after this outline)
* Buffer overflows
* Heap overflows
* Integer overflows
6. Defending against Faulty Inputs and Tainted Data
* User input validation techniques
* Regular expressions
* Parameter checking
* Fault injection reflection
7. Design, Develop and Deploy software through least privilege
* Running in least privilege
* Developing and debugging in least privilege
* Providing secure defaults using the Pareto Principle
* Applying native OS security contexts to processes and files (ACL, 
perms etc)

8. Securing Web applications
* C14N
* SQL Injection
* Cross-site scripting
* Parameter checking
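To make item 5 above concrete, here is the kind of minimal example I have 
in mind for the classroom; alloc_records() is a made-up helper, not from 
any course material, and the same check generalizes to any 
attacker-influenced size calculation.

#include <stdlib.h>
#include <stdint.h>

/* An attacker-supplied count multiplied by an element size can wrap
   around, so the multiplication is checked before the result is
   trusted. */
void *alloc_records(size_t count, size_t record_size)
{
    if (record_size != 0 && count > SIZE_MAX / record_size)
        return NULL;                    /* would overflow: refuse, don't wrap */
    return malloc(count * record_size); /* now known not to wrap */
}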
As I complete the lesson plan this summer this outline will change. I 
think more study on understanding trusted and untrusted boundaries needs 
to be added, and some areas such as Threat Modeling will be fleshed out 
in more detail. Overall, though, you can get an idea of the areas of 
education that I feel make up a core foundation of learning in secure 
programming. I wish I could take credit for this thinking, as it's a 
strong foundation to build on. Alas I cannot; pick up any number of 
secure coding books and you will realize that this is all covered there 
to some degree:

* Building Secure Code
* Secure Coding Principles & Practices
* Secure Programming Cookbook
* Security Engineering
* Building Secure Software
I only wish I could make all these books required textbooks for the 
curriculum. They should be mandatory reading. Although you can teach 
some aspects in any course being offered, the reality is that I think a 
dedicated course helps to build the real foundation of learning. All 
other courses can reinforce these teachings to further drive them home 
for the student.

Of course, I also think students should have to take at least one course 
in ASM to really understand how computer instructions work, so they can 
gain a foundation of learning for the heart of computer processing. And 
I think they should be taught the powers and failures of C. Since I know 
many of you think I'm nuts for that, you might want to look at this 
outline with the same amount of consideration.

--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]



Re: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-06 Thread Dana Epp
Thanks Mark.
As a correction to my book list (and a big apology to Michael Howard), 
that should say "Writing Secure Code". I should know better than to spew 
forth vile discussions on education and not proofread my own work. I 
never noticed it until your response. *sigh*

Now, I just need to convince the dean that all those books should be 
required reading, and that my code auditing is better than my email 
proofreading :)

Mark Rockman wrote:
You are not nuts.  Your course outline  is a very substantial step in the
right direction.


--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]



Re: [SC-L] Education and security -- plus safety, reliability and availability

2004-07-08 Thread Dana Epp
Hey Jim,
All good points. I haven't seen that book and will have to see about 
grabbing it.

Jim & Mary Ronback wrote:
Dana Epp wrote:
 I think they should be taught the powers and failures of C.

Your course sounds enticing. I'm tempted to sign up for it.
Your course should also make a clear distinction between security, 
safety, reliability and availability.
One can write secure code that is not safe and vice versa; one can 
write reliable code that is not safe and vice versa; and one can write 
reliable code that is not secure and vice versa. Finally, one can write 
code that is secure, safe and reliable but not robust, and vice versa. 
In many instances the software requirements, design and implementation 
concerns for security, safety, reliability and availability overlap. 
Safety ensures that bad things do not happen. Security ensures that 
unauthorized access to information is not allowed. Reliability ensures 
that the system and its software behave as specified during a given 
interval of time. Availability ensures that the system and its software 
are not unavailable for use for more than a given period of time. Higher 
availability is provided by tolerance of system and software failures 
and human error.

I suspect that C has a pervasive hold because a large amount of legacy C 
code exists. When modifying or enhancing existing C code one should use 
a safer subset, either by enforcing coding standards like in Safer C or 
by eliminating a large class of errors which are not allowed by some 
newer C compilers, e.g., Safe C and Cyclone. If I had my druthers, 
instead of using C, I would choose SPARK - the safer Ada subset which 
allows you to guarantee that there are no runtime errors.

But if you are stuck with C, you should consider adding the following 
book to the reading list in your security course. It provides an 
extensive list of all the shortcomings and hazards of C.
(1995) Safer C: Developing for High-Integrity and Safety-Critical 
Systems, by Hatton, L. 
http://www.oakcomp.co.uk/SCT_About.html provides a corresponding 
toolset to be used with Safer C.
"The Safer C toolset goes to considerable effort to enforce the 
well-known MISRA C standard. The MISRA C standard was developed by a 
consortium of car manufacturers with the intention of introducing the 
notion of safer language subsets for programmable control systems in the 
auto industry."

There is also some interesting research to make C compilers safer but I 
have not had any experience using them:

http://www.cs.wisc.edu/~austin/scc.html
http://www.cs.wisc.edu/~austin/talk.scc/
SCC: The Safe C Compiler - SCC is an optimizing C-to-C compiler which 
implements the extended pointer and array access semantics needed to 
provide efficient, reliable and immediate detection of memory access 
errors in unbridled C code.
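For readers who haven't used these tools, the class of defect they exist 
to trap is as mundane as the fragment below; the names and bounds here 
are purely illustrative.

#define NBUF 8

/* The kind of memory access error a bounds-checking compiler (SCC,
   Cyclone) traps at run time and that stock C silently accepts. */
void fill_buffer(void)
{
    char buf[NBUF];

    /* Off-by-one: a loop condition of i <= NBUF would write one element
       past the end of buf and quietly corrupt whatever sits next to it.
       The corrected bound below stays inside the array. */
    for (int i = 0; i < NBUF; i++)
        buf[i] = 'A';

    (void)buf;
}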

http://www.zork.org/safestr/ - Provides a safer string handling library 
for C.

http://www.research.att.com/projects/cyclone/ provides another safer C 
dialect. Here is a excerpt from their introduction:

"Cyclone is a language for C programmers who want to write secure, 
robust programs. *It's a dialect of C designed to be /safe/: free of 
crashes, buffer overflows, format string attacks*, and so on. Careful C 
programmers can produce safe C programs, but, in practice, many C 
programs are unsafe. Our goal is to make /all/ Cyclone programs safe, 
regardless of how carefully they were written. All Cyclone programs must 
pass a combination of compile-time, link-time, and run-time checks 
designed to ensure safety.

There are other safe programming languages, including Java, ML, and 
Scheme. Cyclone is novel because its syntax, types, and semantics are 
based closely on C. This makes it easier to interface Cyclone with 
legacy C code, or port C programs to Cyclone. And writing a new program 
in Cyclone ``feels'' like programming in C: Cyclone tries to give 
programmers the same control over data representations, memory 
management, and performance that C has."

Yours safely,
Jim Ronback
Tsawwassen, BC

--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]



Re: [SC-L] Education and security -- another perspective (was "ACM Queue - Content")

2004-07-08 Thread Dana Epp
What is wrong with this picture ?
I see both of you willing to mandate the teaching of C and yet not
mandate the teaching of any of Ada, Pascal, PL/I etc.
This seems like the teaching of "making do".
Hmmm, interesting point. In a particular set of learning objectives 
required to complete a credential (ie: CompSci, CIS etc) what do you 
recommend we sacrifice to put in all this teaching?

I don't pick C for C's sake. I choose C because ON AVERAGE, most 
students will be exposed to C more than the languages you suggest, 
especially in the majority of industries hiring students out of university.

However, that said, I don't think the language matters past exposure to 
the industry. A strong foundation of programming skills should be 
language agnostic; loops are loops, recursion is recursion, conditions 
are conditions etc. Learning the syntax of the language to accomplish it 
is secondary. Knowing how a loop breaks down into machine instructions 
is the goal here. Not how to do it in Ada.
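To make the "how a loop breaks down into machine instructions" point 
concrete, here is a trivial C loop with one plausible, unoptimized 
x86-flavoured rendering shown as a comment; the exact instructions will 
vary by compiler and flags, and this is illustrative rather than the 
output of any particular toolchain.

int sum_to(int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += i;
    return total;
}

/* One plausible, unoptimized x86-ish translation (illustrative only):
 *
 *        xor  eax, eax        ; total = 0
 *        xor  ecx, ecx        ; i = 0
 *    top:
 *        cmp  ecx, edi        ; i < n ?
 *        jge  done
 *        add  eax, ecx        ; total += i
 *        inc  ecx             ; i++
 *        jmp  top
 *    done:
 *        ret                  ; result returned in eax
 *
 * The loop is the same loop whether the source was C, Ada or Pascal;
 * only the syntax above it changes.
 */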

Think about it in terms of a linguist doing translation at the 
United Nations. They didn't simply go and learn every particular 
language. They are trained in understanding the mechanisms of human 
speech and formal grammar, and they then apply that to the language they 
are learning. In other words, they work from their foundation of 
learning in grammar and then apply the syntax of the particular language 
they are translating. It makes learning new languages much easier, and 
much faster.

So it should be with programming. If a student has a strong foundation 
of learning when it comes to programming, they can adapt to the different 
computer languages they are exposed to as they encounter them. C is a 
perfect language for quickly getting those concepts across in a practical 
environment at university. And more importantly, from a secure coding 
perspective, you can show what NOT to do.

--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]


Re: [SC-L] Programming languages used for security

2004-07-10 Thread Dana Epp
My, what a week of interesting discussions. Let's end this week on a good 
and light-hearted note.

Admit it. We all know the most secure programming language is Logo anyways.
It's hip to be 'rep 4 [ fwd 50 rt 90]'
Laugh. Or the world laughs at you. Have a good weekend guys.
Crispin Cowan wrote:
David Crocker wrote:
1. Is it appropriate to look for a single "general purpose" programming
language? Consider the following application areas:
a) Application packages
b) Operating systems, device drivers, network protocol stacks etc.
c) Real-time embedded software
The features you need for these applications are not the same. For 
example,
garbage collection is very helpful for (a) but is not acceptable in 
(b) and (c).
For (b) you may need to use some low-level tricks which you will not 
need for
(a) and probably not for (c).
 

I agree completely that one language does not fit all. But that does not 
completely obviate the question, just requires some scoping.

2. Do we need programming languages at all? Why not write precise 
high-level
specifications and have the system generate the program, thereby 
saving time and
eliminating coding error? [This is not yet feasible for operating 
systems, but
it is feasible for many applications, including many classes of embedded
applications].
 

The above is the art of programming language design. Programs written in 
high-level languages are *precisely* specifications that result in the 
system generating the program, thereby saving time and eliminating 
coding error. You will find exactly those arguments in the preface to 
the K&R C book.

Crispin

--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]



Re: [SC-L] Exploiting Software: How to Break Code

2004-11-11 Thread Dana Epp
George,
I wrote a review of the book on my blog at: 
http://silverstr.ufies.org/blog/archives/000592.html

Not sure if that's what you are looking for, but take a look if you want 
a book-review-style view of it.

- Dana
- Original Message - 
From: "Greenarrow 1" <[EMAIL PROTECTED]>
To: "sc-l" <[EMAIL PROTECTED]>
Sent: Wednesday, November 10, 2004 6:11 PM
Subject: [SC-L] Exploiting Software: How to Break Code


Does anyone have any comments about this book?  I have read some reviews, 
but they were on the site advertising the book for sale.  They stated 
that this book is a must for anyone wanting to harden code in programs, 
software and hardware, but then that could just be a sales pitch.  I 
would like to see some posts, both pro and con, about this book.

Exploiting Software: How to Break Code
By Greg Hoglund, Gary McGraw.
Published by Addison Wesley Professional
Regards,
George
Greenarrow1
InNetInvestigations-Forensics 




Re: [SC-L] How do we improve s/w developer awareness?

2004-11-12 Thread Dana Epp
I think we have to go one step further.
It's nice to know what the attack patterns are. A better thing is to know how to identify them 
during threat modeling, and then apply safeguards to mitigate the risk. In other words, we need a 
merge of the thinking in "Exploiting Software" and "Building Secure Software" into a 
single source... where attack and defense can be spoken about together.
We all like to spout that until you know the threats to which you are 
susceptible, you cannot build secure systems. The reality is, unless you 
know how to MITIGATE the threats... simply knowing they exist doesn't do much 
to protect the customer.
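As a tiny sketch of what pairing attack and defense in one place could 
look like, take the "Relative Path Traversal" pattern from the list below 
and put its mitigation right beside it. This assumes a POSIX environment; 
BASE_DIR and open_user_file() are illustrative names of my own, not 
anything from either book.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>

#define BASE_DIR "/var/app/userfiles"

/* Attack: "../../etc/passwd" style names escape the intended directory.
   Defense: canonicalize first, then verify the result still lives under
   the base directory before opening anything. */
FILE *open_user_file(const char *untrusted_name)
{
    char requested[PATH_MAX];
    char resolved[PATH_MAX];

    if (snprintf(requested, sizeof requested, "%s/%s",
                 BASE_DIR, untrusted_name) >= (int)sizeof requested)
        return NULL;                      /* too long: refuse */

    if (realpath(requested, resolved) == NULL)
        return NULL;                      /* nonexistent or unresolvable */

    if (strncmp(resolved, BASE_DIR "/", strlen(BASE_DIR) + 1) != 0)
        return NULL;                      /* escaped the base directory */

    return fopen(resolved, "rb");
}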
Gary McGraw wrote:
One of the reasons that Greg Hoglund and I wrote Exploiting Software was
to gain a basic understanding of what we call "attack patterns".  The
idea is to abstract away from platform and language considerations (at
least some), and thus elevate the level of attack discussion.
We identify and discuss 48 attack patterns in Exploiting Software.  Each
of them has a handful of associated examples from real exploits.  I will
paste in the complete list below.  As you will see, we provided a start,
but there is plenty of work here remaining to be done.
Perhaps by talking about patterns of attack we can improve the signal to
noise ratio in the exploit discussion department.
gem
Gary McGraw, Ph.D.
CTO, Cigital
http://www.cigital.com
WE NEED PEOPLE!
Make the Client Invisible
Target Programs That Write to Privileged OS Resources 
Use a User-Supplied Configuration File to Run Commands That Elevate
Privilege 
Make Use of Configuration File Search Paths 
Direct Access to Executable Files 
Embedding Scripts within Scripts 
Leverage Executable Code in Nonexecutable Files 
Argument Injection 
Command Delimiters 
Multiple Parsers and Double Escapes 
User-Supplied Variable Passed to File System Calls 
Postfix NULL Terminator 
Postfix, Null Terminate, and Backslash 
Relative Path Traversal 
Client-Controlled Environment Variables 
User-Supplied Global Variables (DEBUG=1, PHP Globals, and So Forth) 
Session ID, Resource ID, and Blind Trust
Analog In-Band Switching Signals (aka "Blue Boxing") 
Attack Pattern Fragment: Manipulating Terminal Devices 
Simple Script Injection 
Embedding Script in Nonscript Elements 
XSS in HTTP Headers 
HTTP Query Strings 
User-Controlled Filename 
Passing Local Filenames to Functions That Expect a URL 
Meta-characters in E-mail Header
File System Function Injection, Content Based
Client-side Injection, Buffer Overflow
Cause Web Server Misclassification
Alternate Encoding the Leading Ghost Characters
Using Slashes in Alternate Encoding
Using Escaped Slashes in Alternate Encoding 
Unicode Encoding 
UTF-8 Encoding 
URL Encoding 
Alternative IP Addresses 
Slashes and URL Encoding Combined 
Web Logs 
Overflow Binary Resource File 
Overflow Variables and Tags 
Overflow Symbolic Links 
MIME Conversion 
HTTP Cookies 
Filter Failure through Buffer Overflow 
Buffer Overflow with Environment Variables 
Buffer Overflow in an API Call 
Buffer Overflow in Local Command-Line Utilities 
Parameter Expansion 
String Format Overflow in syslog() 




--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]



Re: [SC-L] Secured Coding

2004-11-13 Thread Dana Epp
George,
I truly believe this as no matter how secured we make our programs there 
will always be someone to figure how to break it.
Like most things in information security it's about risk mitigation, NOT risk avoidance. We can sit and profile the adversary till we are blue in the face and assume we know how he will think; the landscape always changes and we will at times miss something. 

But that doesn't mean we stop trying. Secure software engineering is still in its infancy. We will continue to have failures. What will make the difference is how we learn from them, adapt and move forward. Having a poor track record to this point doesn't help. Things like buffer overflows have been around for over 20 years and many developers still haven't figured them out. 

But that doesn't mean we stop trying. We need to approach it with a higher mindset. Instead of worrying about the next great technical safeguard or uber coding technique we have to really understand how software works. How it is exploited and how we can mitigate the risks associated with various attack pattern TYPES. 

I say TYPES as I think it goes beyond what people like Gary McGraw outline. Commonalities in the patterns continue to be ignored as systems get more obscured with higher-level languages that hide what is going on. New developers are emerging without a REAL understanding of what is going on under the hood, with a false sense of security because they have been told that their language of choice is safer to code in. 

Sometimes we need to reflect on history and try to learn from it. We like to tout that defense in 
depth in any environment is a good idea. Yet do we actually use that thinking in software? Do we 
actually understand what that means? To me I would rather have three smaller walls than one BIG 
one. Why? Because I will typically know when the first wall is breached, giving me time to REACT. 
How much software today simply implements a single safeguard and thinks it's safe? It's not 
acceptable, and that's not a failing of the discipline. It's the failing of people who don't know any 
better. They read "Writing Secure Code" or "Building Secure Software" and think 
they have all the answers, when there is much, much more. And what's sad is most developers don't 
even go that far.
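In code, the "three smaller walls" idea can be as plain as the sketch 
below; handle_username() and its rules are hypothetical, and the point is 
only that each layer can fail independently and each failure is visible, 
which is what buys you the time to react.

#include <stdio.h>
#include <string.h>
#include <ctype.h>

#define MAX_USERNAME 32

int handle_username(const char *input)
{
    /* Wall 1: length check at the trust boundary. */
    if (input == NULL || strlen(input) >= MAX_USERNAME) {
        fprintf(stderr, "reject: length\n");
        return -1;
    }

    /* Wall 2: character whitelist, so later layers only ever see what
       they expect. */
    for (const char *p = input; *p; p++) {
        if (!isalnum((unsigned char)*p) && *p != '_' && *p != '-') {
            fprintf(stderr, "reject: character 0x%02x\n", (unsigned char)*p);
            return -1;
        }
    }

    /* Wall 3: even after validation, downstream code still copies with a
       bound rather than trusting the earlier walls. */
    char username[MAX_USERNAME];
    snprintf(username, sizeof username, "%s", input);

    printf("accepted: %s\n", username);
    return 0;
}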
We need to reduce, redirect or eliminate the impacts of attacks, and that goes beyond 
simply writing "secure" code. We need to apply the thinking to configuration, 
to deployment AND to design. Microsoft calls it SD3+C. It's one of the concepts I LIKE 
coming out of there. Secure by design, secure by default and secure in deployment. (I 
won't get into the +C here.) Yet I know MANY people don't even think about that when 
writing software. Heck, many still WRITE code as an Administrator on a Windows system for 
god's sake. And deploy software requiring similar rights when it's not needed. *sigh*
Now I know I am preaching to the choir. You guys know all this stuff. But the 
people out there DON'T. And now we have come full circle to the discussion of 
education.
George, you are right that there will always be an adversary who will be able to break our safeguards. The trick is to apply the appropriate safeguards in the right places to make it much more difficult for the attacker, so that they will move on to easier targets. It's not about having the BEST security in the world; it's about having "just enough" security to mitigate the risks you wish to protect against. Does it make sense to spend $500,000 on developing a crypto scheme for a P2P file sharing app? Probably not. Would you want to apply that to something protecting critical infrastructure? Probably. 

Think of it as a chess game with a twist. We always have to think ahead of the adversary, thinking moves ahead of what they are going to do. Unfortunately they can move their pawns backwards, giving them an unfair advantage, and at times getting ahead of us. But that doesn't mean we stop trying. 

--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]



Re: [SC-L] How do we improve s/w developer awareness?

2004-12-02 Thread Dana Epp

I think we also have to realize that bridge building has had centuries of 
time to evolve and learn from its mistakes. Secure software engineering as 
a discipline is still in its infancy. I would love to have seen the 
quality of bridges in its first 50 years of development.

That's of course no excuse for the current state of software development. 
But comparisons like this are like statistics... 86.12345% of them are made 
up, or have no sane correlation.

- Original Message - 
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, December 02, 2004 8:25 AM
Subject: Re: [SC-L] How do we improve s/w developer awareness? 

>I have to say I find your comparison between bridge engineers and software
> engineers rather troubling.
>
> In response to your question:
>
>  'Would you accept "it was too hard to do a stress analysis" from the
> engineer designing a bridge?'
>
> I think, regrettably, we probably would do these days.
>
> Remember that little incident in 2000 when the London millennium bridge 
> was
> closed immediately after opening due to excessive wobbling when people
> walked across it? I can't guarantee that my recollection is accurate, but
> I'm sure they were trying to put this down to that software classic, a
> 'Design feature'.
>
> Seems that far from Software Engineers taking the bridge engineers
> approach, we may be seeing the exact reverse happening. :-)
>
> --
> Graham Coles.



Re: [SC-L] "Tech News on ZDNet" -- OS makers: Security is job No. 1

2005-05-11 Thread Dana Epp
...on the perceived simplicity that the user needs in 
their operating systems (and applications). However, I don't believe it can 
change overnight. Which is why I think MS may be more successful than we 
realize in promoting security to consumers, as their security management 
lifecycle touches everyone and everything that works with them. These things 
took decades to build up and break. It won't be fixed overnight.
Sorry for the long post. This is a topic that drives me nuts. Everyone has their own views that typically are painted in a little black box (including mine). We have to step back sometimes and look at the bigger picture here. This is a great list Ken runs about secure coding. Most (if not all) of us on the list GET why secure programming is important. But many don't weigh that technological decision against the real business ones that corporations need to make. It's tricky to weigh things accordingly to protect business viability and fiscal responsibility while protecting customers, especially when management buy-in isn't always available. We know the realities of cost savings and ROI on designing security in. But most out there do not. And blanket statements about people wanting to make money off of security are futile without digging deeper into WHY they appear to be doing that. 

At least, that's my opinion on it anyways. YMMV. 

--
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]
Gizmo wrote:
Microsoft is all about making Windows 'more secure' because they see a
potential revenue stream.  Note that their approach is NOT "Let's make the
OS more secure so that this crap can't get installed to start with"; rather,
it is "Let's graft more crap onto the system and then sell people a
subscription so that they can be protected from the problems we have
created, at least most of the time".
To be sure, I like Apple's approach even less.  "We want to help the
customer protect their computer"?!
I realize that security requires the cooperation of the user, but providing
the typical user with a readily available list of the processes running in
the system isn't going to do anything but confuse the poor user.
We need to remember that users are generally illiterate when it comes to the
details of how their computer functions.  That's why they are USERS.  They
don't know (or care) how or why their computer works.  All they care about
is that it does what they need for it to do.  Quite frankly, that is all
they really SHOULD have to care about.  It is not necessary for me to
understand all the gory intimate details of how my car works in order for me
to use it in a safe fashion.  The same should be true of my computer.
I dunno, maybe I'm way off base and just too cynical for my own good, but
that's the way I see it.
Later,
Chris
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Kenneth R. van Wyk
Sent: Tuesday, May 10, 2005 6:37 AM
To: Secure Coding Mailing List
Subject: [SC-L] "Tech News on ZDNet" -- OS makers: Security is job No. 1
FYI, somewhat interesting story today on ZDNet (see
http://news.zdnet.com/2100-1009_22-5697133.html?tag=st.prev) about
operating system makers paying more attention to security.  Note the
differing (public)
statements by Microsoft and Apple...
Being fundamentally a "glass half full" sort of person, I think that it's
refreshing to hear that OS vendors are making their products' security a
higher priority than it's typically been in the past.  There's also an
implicit message here regarding a proactive software security posture vs.
"firewall and IDS it" after the product is released.
Cheers,
Ken van Wyk
--
KRvW Associates, LLC
http://www.KRvW.com



RE: [SC-L] Managing the insider threat through code obfuscation

2005-12-15 Thread Dana Epp
There are no absolutes here. Obfuscation has its place. I use Xenocode's 
Obfuscator for my .NET code. I do not do it to try to hide bad code from 
intense scrutiny by potential attackers. I do it to hide business logic 
from the looky-loos who have tools like Reflector and may want to 
blatantly rip it off. (Which has happened before.)
 
Weighing my risks accordingly, I expect that once the byte code is 
converted to native instructions on the target system, any competent 
person can easily look at the disassembly and do their bidding. If they 
are gods with disassemblers, all the power to them. But they can do that 
with any code on the system, be it driven from bytecode converted to 
native or directly linked with the linker. So the threat level is the 
same at that point.
 
Obfuscation is just another layer of defense for business logic against 
what I would consider 'layer one' attackers. Depending on the tools you 
use it can work, and work well. It's designed to mitigate a certain type 
of threat. However, anyone with enough patience, a copy of Gary and 
Greg's "Exploiting Software" and a copy of IDA Pro will be able to break 
the shackles of such tools and get deep into the bowels of the executing 
code. Nothing is going to stop that.
 
Now let's look at your original example of being able to get a great 
deal of information about the internals of databases etc. The reality is 
that such information shouldn't be IN the application in the first place. 
Where possible, secrets like db passwords should use the native operating 
system's security mechanisms (ie: trusted connections, Windows 
Authentication, DPAPI etc. when working on Windows) to store and manage 
credentials. Rather than concatenating query strings (which are harder to 
obfuscate btw), try to use things like stored procedures (where 
appropriate) that keep the db structure on the server while at the same 
time giving another layer of validation control for input. I know I am 
preaching to the choir here.
 
My point is that no matter what the language is, and what the threat is 
you are expecting to mitigate, different tools can do different things. 
Obfuscation has its place. As does a deep hole, some cement and a shovel. 
You can dig your own bunker however you see fit. How strong you make it 
depends on what sort of attack you are fretting about.


---
Regards,
Dana Epp
[Blog: http://silverstr.ufies.org/blog/]


From: [EMAIL PROTECTED] on behalf of Kenneth R. van Wyk
Sent: Thu 12/15/2005 7:09 AM
To: Jose Nazario
Cc: Secure Coding Mailing List
Subject: Re: [SC-L] Managing the insider threat through code obfuscation

On Thursday 15 December 2005 09:26, Jose Nazario wrote:
> if the person can develop exploits against the holes in the code, what
> makes you think they can't fire up a runtime debugger and trace the code
> execution and discover the same things?

Nothing makes me think that at all; in fact, I was quite skeptical of the
various product claims, which is why I wanted to hear about others'
experience with them.

Cheers,

Ken
--
KRvW Associates, LLC
http://www.KRvW.com


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-03 Thread Dana Epp
I think I would word that differently. The design defect was when 
Microsoft decided to allow metadata to call GDI functions.

Around 1990, when this was introduced, the threat profile was entirely 
different; the operating system could trust the metadata. Well, actually 
I would argue that it couldn't, but no one knew any better yet. At the 
time SetAbortProc() was an important function to allow for print 
cancellation in the co-operative multitasking environment that was 
Windows 3.0.
 
To be clear, IE was NOT DIRECTLY vulnerable to the WMF attack vector 
everyone likes to use as a test case for this discussion. IE actually 
refuses to process any type of metadata that supports META_ESCAPE records 
(which SetAbortProc relies on). Hence it's not possible to exploit the 
vulnerability by simply calling a WMF image via HTML. So how is IE 
vulnerable then? It's not, actually. The attack vector uses IE as a 
conduit to call out to secondary library code that will process it. In 
the case of the exploits that hit the Net, attackers used an IFRAME hack 
to call out to the shell to process it. The shell would look up the 
handler for WMF, which was the Windows Picture Viewer that did the 
processing in shimgvw.dll. When the dll processed the WMF, it would 
convert it to a printable EMF format, and bam... we ran into problems.
 
With the design defect being the fact that metadata can call arbitrary 
GDI code, the implementation flaw is the fact that applications like IE 
rely so heavily on calling out to secondary libraries that just can't be 
trusted. Even if IE has had a strong code review, it is extremely 
probable that most of the secondary library code has not had the same 
audit scrutiny. This is a weakness in all applications, not just IE. When 
you call out to untrusted code that you don't control, you put the 
application at risk. No different than any other operating system. The 
only problem is that Windows is riddled with these potential holes 
because it shares so much of the same codebase. And in the past the teams 
rarely talked to each other to figure this out.
 
Code reuse is one thing, but some of the components in Windows are 
carry-over from 15 years ago, and they will continue to put us at risk 
due to the implementation flaws that haven't yet been found. But with 
such a huge master source to begin with, it's not something that will be 
fixed overnight.
 


---
Regards,
Dana Epp [Microsoft Security MVP]
Blog: http://silverstr.ufies.org/blog/


From: [EMAIL PROTECTED] on behalf of Crispin Cowan
Sent: Fri 2/3/2006 12:12 PM
To: Gary McGraw
Cc: Kenneth R. van Wyk; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

Gary McGraw wrote:
> To cycle this all back around to the original posting, lets talk about
> the WMF flaw in particular.  Do we believe that the best way for
> Microsoft to find similar design problems is to do code review?  Or
> should they use a higher level approach?
>
> Were they correct in saying (officially) that flaws such as WMF are hard
> to anticipate?

I have heard some very insightful security researchers from Microsoft
pushing an abstract notion of "attack surface", which is the amount of
code/data/API/whatever that is exposed to the attacker. To design for
security, among other things, reduce your attack surface.

The WMF design defect seems to be that IE has too large of an attack
surface. There are way too many ways for unauthenticated remote web
servers to induce the client to run way too much code with parameters
provided by the attacker. The implementation flaw is that the WMF API in
particular is vulnerable to malicious content.

None of which strikes me as surprising, but maybe that's just me :)

Crispin
--
Crispin Cowan, Ph.D.                      http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] ddj: beyond the badnessometer

2006-07-13 Thread Dana Epp
Although pentesting isn't perfect, I think that in the right scope it has
the potential to play a vital role in the development lifecycle of a
project. 

Building known attack patterns into a library which can be run against a
codebase has some merit, as long as you understand what to expect from
the results. As an example, I would consider automated vulnerability
assessment tools built to do input validation fuzzing to be part of
pentesting. And most pentesters out there use said tools for that very
reason.
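Stripped to its core, that kind of input validation fuzzing is nothing 
more exotic than the toy harness below; parse_record() is a stand-in for 
whatever parser, protocol handler or file loader is actually under test, 
and a real harness would also watch for crashes and hangs rather than 
just return codes.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for the code under test; replace with the real target. */
static int parse_record(const unsigned char *buf, size_t len)
{
    if (len < 4)
        return -1;          /* toy rule so the harness is self-contained */
    return (int)buf[0];
}

int main(void)
{
    unsigned char buf[512];
    srand((unsigned)time(NULL));

    /* A crude mutation loop: random lengths, random bytes, lots of
       iterations. The value is not proving the code good; it is
       surfacing the inputs that make it fall over. */
    for (long iter = 0; iter < 1000000; iter++) {
        size_t len = (size_t)(rand() % (int)sizeof buf);
        for (size_t i = 0; i < len; i++)
            buf[i] = (unsigned char)(rand() & 0xff);

        (void)parse_record(buf, len);   /* rejection is fine; crashes are not */
    }
    return 0;
}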

I agree that pentesting at the START of a project is futile. Evaluating
the risks to the software by understanding its architecture, inputs and
flows goes much deeper and allows you to better find design flaws
instead of implementation bugs. But I am not so sure I would dismiss the
act of pentesting because of the badness-ometer factor. If we did, we
would also be dismissing things like static code analysis tools, since to
many out there they show similar results.

On a different tangent to this thread though, I don't think that the
BEST use of pentesting is for determining how secure your code is in the
first place. It is much better suited to letting you stress test failure
code paths for different implementation configurations. No matter how
safely you write your code, pentesting can ferret out scenarios that come
from deployment configuration problems. That is, if the pentest tools and
the user(s) of said tools know how to run through this properly. As you
mentioned in your article, too many people pass themselves off as
pentesting experts when they aren't. Just because they CAN run Nessus
doesn't mean they are good pentesters.

It's about using the right tool for the right job. As pentest tools
mature I think we will be able to use the growing attack libraries to
test against known patterns to eliminate the brain-dead security bugs,
while allowing the tools to go deeper and ferret out problems as they
reach more code coverage. The more we can automate that process and
make it part of our daily tests... the quicker we can expose 'known
problems' in a code base.

Can anyone recommend a tool that CAN tell you how good your security is?
I thought not.

Here I go quoting Spaf again.

"When the code is incorrect, you can't really talk about security. When
the code is faulty, it cannot be safe."

Problem is, no tool in the world is going to show green blinky lights to
tell you the code is safe. Human heuristics come into play here, and we
have to leverage what assets we have, both manual and automatic, to find
the faulty code and eliminate it. And pentesting is just another one of
those tools in the arsenal to help.

Regards,
Dana Epp 
[Microsoft Security MVP]
http://silverstr.ufies.org/blog/

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gary McGraw
Sent: Thursday, July 13, 2006 8:05 AM
To: Nash
Cc: Secure Coding Mailing List
Subject: RE: [SC-L] ddj: beyond the badnessometer

Excellent post nash.  Thanks!

I agree with you for the most part.  You have a view of pen testing that
is quite sophisticated (especially compared to the usual drivel).  I
agree with you so much that I included pen testing as the third most
important touchpoint in my new book "Software Security" www.swsec.com.
It is the subject of chapter 6.  All the code review and architectural
risk analysis in the world can still be completely sidestepped by poor
decisions regarding the fielded software.  Pen testing is ideal for
looking into that.

But there are three things I want to reiterate:
1) pen testing is a bad way to *start* working on software
security...you'll get much better traction with code review and
architectural risk assessment.  {Of course, what nash says about the
power of a live sploit is true, and that kind of momentum creation may
be called for in a completely new situation where biz execs need basic
clue.}
2) pen testing can't tell you anything about how good your security is,
only how bad it is.
3) never use the results of a pen test as a "punch list" to attain
security

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
book www.swsec.com









Re: [SC-L] bumper sticker slogan for secure software

2006-07-18 Thread Dana Epp
Or perhaps less arrogance in believing "it won't sink".

Absolute security is a myth. As is designing absolutely secure software.
It is a lofty goal, but an absolute that just isn't achievable as
threats change and new attack patterns are found. Designing secure
software is about balancing software dependability against risks
mitigated to acceptable tolerance levels, while at the same time ensuring
the software accomplishes its original goal... to solve some problem for
the user. 

On my office door is a bumper sticker I made. It simply says:

0x5

10 points to the first person to explain what that means. 


Regards,
Dana Epp 
[Microsoft Security MVP]
http://silverstr.ufies.org/blog/

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of SC-L Subscriber Dave
Aronson
Sent: Tuesday, July 18, 2006 7:53 AM
To: SC-L@securecoding.org
Subject: [SC-L] bumper sticker slogan for secure software

Paolo Perego [mailto:[EMAIL PROTECTED] writes:

 > "Software is like Titanic, pleople claim it was unsinkable. Securing
is  > providing it power steering"

But power steering wouldn't have saved it.  By the time the iceberg was
spotted, there was not enough time to turn that large a boat.  Perhaps
radar, but that doesn't make a very good analogy.  Maybe a thicker
tougher hull and automatic compartment doors?

-Dave




___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc -
http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] bumper sticker slogan for secure software

2006-07-20 Thread Dana Epp
> yeah.
> but none of this changes the fact that it IS possible to write
completely secure code.
> -- mic

And it IS possible that a man will walk on Mars someday. But it's not
practical or realistic in the society we live in today. I'm sorry mic,
but I have to disagree with you here.

It is EXTREMELY difficult to have code be 100% correct if an application
has any level of real use or complexity. There will be security defects.
The weakest link here is the human factor, and people make mistakes.
More importantly, threats are constantly evolving, and what you may
consider completely secure today may not be tomorrow, when a new attack
vector against your software is recognized. And unless you wrote
every single line of code yourself without calling out to ANY libraries,
you cannot rely on the security of other libraries or components that
may NOT have had the same engineering discipline that you apply to your
own code base. 

Ross Anderson once said that secure software engineering is about
building systems to remain dependable in the face of malice, error, or
mischance. I think he has something there. If we build systems to
maintain confidentiality, integrity and availability, we have the
ability to fail gracefully, recovering from unknown or changing
problems in our software without harming the user or their data.
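
As a trivial illustration of failing gracefully rather than failing open,
here is a minimal sketch in C (the policy lookup is a made-up stand-in,
not any real API) where anything other than an explicit "allow" is
treated as a deny:

    #include <stdio.h>

    /* Stand-in policy lookup: returns 1 = allow, 0 = deny, -1 = error.
       A real implementation would consult an ACL, database, etc. */
    static int check_policy(const char *user, const char *resource)
    {
        if (user == NULL || resource == NULL)
            return -1;                 /* lookup failed */
        return 0;                      /* demo default: deny */
    }

    /* Fail closed: errors and explicit denials both end up as "no". */
    static int is_access_allowed(const char *user, const char *resource)
    {
        return check_policy(user, resource) == 1;
    }

    int main(void)
    {
        printf("alice -> payroll: %s\n",
               is_access_allowed("alice", "payroll") ? "allow" : "deny");
        return 0;
    }

The only point being made is that the error path defaults to the safe
outcome.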

I don't think we should ever stop striving to reach secure coding
nirvana. But I also understand that in the real world we are still in
our infancy when it comes to secure software as a discipline, and we
still have much to learn before we will reach it. 


Regards,
Dana Epp
[Microsoft Security MVP]
http://silverstr.ufies.org/blog/

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] bumper sticker slogan for secure software

2006-07-21 Thread Dana Epp
Actually, Brian Shea got the points for emailing me that he knew it was
the system error "Access Denied".

An extra 10 points goes to Andrew van der Stock for his explanation
that:

"apparently the term originates from radio, where 5x5 means good
reception and good signal strength (in that order). So

0x5

means

- no reception ("0")
- good signal strength ("5")

ie, we're doing ok at getting our message out, but people aren't  
listening yet. "

That cracked me up. So fitting for this forum.
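
For anyone who hasn't bumped into it, a minimal sketch in C of where
that 0x5 shows up in practice (assumes the Windows SDK; the target
process here is just an illustration):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Try to open a handle we are (probably) not allowed to touch:
           PID 4 is the System process on modern Windows. */
        HANDLE h = OpenProcess(PROCESS_ALL_ACCESS, FALSE, 4);
        if (h == NULL) {
            DWORD err = GetLastError();
            /* ERROR_ACCESS_DENIED is defined as 5 -- the 0x5 on the door. */
            printf("error 0x%lX%s\n", err,
                   err == ERROR_ACCESS_DENIED ? " (access denied)" : "");
        } else {
            CloseHandle(h);
        }
        return 0;
    }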


Regards,
Dana Epp 
[Microsoft Security MVP]
http://silverstr.ufies.org/blog/

-Original Message-
From: mikeiscool [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 20, 2006 3:25 PM
To: Wall, Kevin
Cc: Dana Epp; SC-L@securecoding.org
Subject: Re: [SC-L] bumper sticker slogan for secure software

> BTW, does anyone besides me think that it's time to put this thread to
> rest?

I do.

But i'm still waiting for my points from dana ...


> -kevin

-- mic

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-24 Thread Dana Epp
But secure software is not a technology problem, it's a business one.
Focused on people.

If smartcards were so great, why isn't every single computer in the
world equipped with a reader? There will always be technology safeguards
we can put in place to mitigate particular problems. But technology is
not a panacea here. 

There will always be trade-offs that will trump secure design and
deployment of safeguards. It's not about putting ABSOLUTE security in...
It's about putting just enough security in to mitigate risks to
acceptable levels to the business scenario at hand, and at a cost that
is justifiable. Smartcard readers aren't deployed everywhere because they
are simply too costly to deploy against particular PERCEIVED threats
that may or may not be part of an application's threat profile.

I agree that we can significantly lessen the technology integration
problem with computers. We are, after all, supposed to be competent
developers who can bend the IT infrastructure to our bidding. The
problem is when we keep our head in the technology bubble without
thinking about the business impacts and costs, wasting resources in the
wrong areas.

It is no different than "network security professionals" that deploy
$30,000 firewalls to protect digital assets worth less than the computer
they are on. (I once saw a huge Checkpoint firewall protecting an MP3
server. Talk about waste.) Those guys should be shot for ever making
that recommendation. As should secure software engineers who think they
can solve all problems with technology without considering all risks and
impacts to the business.


Regards,
Dana Epp 
[Microsoft Security MVP]
http://silverstr.ufies.org/blog/

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of mikeiscool
Sent: Sunday, July 23, 2006 3:42 PM
To: Crispin Cowan
Cc: Secure Coding Mailing List
Subject: Re: [SC-L] "Bumper sticker" definition of secure software

> As a result, really secure systems tend to require lots of user 
> training and are a hassle to use because they require permission all
the time.

No, I still disagree. Consider a smart card. Far easier to use than the
silly bank logins that are available these days. Far easier than even
bothering to check if the address bar is yellow, due to FF, or some
other useless addon.

You just plug it in, and away you go, pretty much.

And requiring user permission does not make a system harder to use (per
se). It can be implemented well, or it can be implemented badly.


> Imagine if every door in your house was spring loaded and closed
> itself after you went through. And locked itself. And you had to use a
> key to open it each time. And each door had a different key. That
> would be really secure, but it would also not be very convenient.

We're talking computers here. Technology lets you automate things.


> Crispin

-- mic
___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc -
http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php



[SC-L] Secure software education. Does it start with our tools?

2007-01-11 Thread Dana Epp
Hey guys,
 
Last month I blogged (http://silverstr.ufies.org/blog/archives/000989.html) 
about my disappointment with the fact that the new service pack for Visual 
Studio 2005, on Vista, suggests with a specific dialog box that you run the IDE 
as Administrator. (http://msdn2.microsoft.com/en-us/vstudio/aa972193.aspx).
 
The actual dialog box is alarming and misleading, because it really gives poor 
advice and the false impression that developers HAVE to be building software as 
Administrator. Am I being selfish in believing that this is the LAST thing we 
want to do when trying to educate developers to not write code with 
administrative privileges? I know you can simply uncheck the thing and move on, 
(as recommended by Michael Howard at 
http://blogs.msdn.com/michael_howard/archive/2007/01/04/my-take-on-visual-studio-2005-sp1-and-windows-vista.aspx),
 but the reality is that this guidance isn't helping us as we try to educate 
developers to write software requiring fewer privileges, when the tools we use 
don't adhere to that themselves!
 
For years we have been trying to educate developers to run with least privilege 
so they can build safer software in a more restricted environment. This is 
particularly important in a Windows environment, where quite a few attack 
vectors would be significantly lessened if the software had simply required 
fewer privileges at design time. I fear that when developers see such guidance 
they will simply run all their tools in an elevated context, or worse yet turn 
off things like UAC altogether so they can go about their "daily business". 
Now, I am pretty sure that a lot of us on this list have been building 
software in least privilege environments for years. But what does this say to 
those who don't know any better when they see such dialog boxes every time 
they start their tools?
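
To make the point concrete, here is a minimal sketch in C (assuming a
Vista-or-later Windows SDK) of how any tool could detect that it was
launched elevated and nag the developer, instead of quietly steering
them toward running as Administrator:

    #include <windows.h>
    #include <stdio.h>

    /* Returns 1 if the current process is running with an elevated token. */
    static int running_elevated(void)
    {
        HANDLE token = NULL;
        TOKEN_ELEVATION elevation;
        DWORD size = sizeof(elevation);
        int elevated = 0;

        if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token)) {
            if (GetTokenInformation(token, TokenElevation, &elevation,
                                    sizeof(elevation), &size)) {
                elevated = (elevation.TokenIsElevated != 0);
            }
            CloseHandle(token);
        }
        return elevated;
    }

    int main(void)
    {
        if (running_elevated())
            fprintf(stderr, "Warning: running as Administrator -- "
                            "so much for least privilege.\n");
        return 0;
    }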
 
Microsoft has even written a Vista "Issue list" for when you run Visual Studio 
as a Standard User. (http://msdn2.microsoft.com/en-us/vstudio/aa972193.aspx). 
There are plenty of examples there where the workaround is "Run Visual Studio 
with elevated administrator permissions" when it doesn't have to be. So it's 
clear they know this is an issue.
 
Am I wrong for being disappointed in Microsoft's approach at this stage of the 
game? We aren't talking about an old IDE written for Windows 95. This was built 
FOR and ON Vista. With Microsoft's great strides in their SDLC process to date, 
should we be expecting them to lead the charge in educating developers to run 
as Standard Users?  What are your thoughts on this? 
 
---
Regards,
Dana Epp [Microsoft Security MVP]
Blog: http://silverstr.ufies.org/blog/
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Unclassified NSA document on .NET 2.0 Framework Security

2008-11-26 Thread Dana Epp
With all due respect, I think this is where the process of secure coding
fails. I think it stems from poor education, but it's compounded by an
arrogant cop-out that developers have no power. You are not alone in that
view. I hear it a lot. And I think it's an easy out.

I agree with you that buy in for designing secure code must come from the
top down. It has to start at the senior management level and work its way
down the line. However, there are certain day-to-day actions that a
developer has complete control over.

With tight deadlines and a lack of resources it's easy to sacrifice good
principles and practices to get code out. No one denies that. But there are
certain things developers CAN and SHOULD be doing. They should be thinking
more defensively about their code. If you consider the criticality of many of
today's design flaws, a lot of it can be controlled by better understanding
how the data traverses the system, and more importantly how to address it
as it crosses trust boundaries. By understanding this, it's easier to see the
entry points that matter most to an adversary, and it allows the
developer to decide how the code should behave when it fails. This has very
little to do with management. This is the difference between strategic and
tactical decisions in project development. A developer doesn't need to ask
his boss whether he can or should use exception handling correctly. Or
validate all untrusted user input. Or check return codes when calling
functions. Sadly... in this day and age... most developers don't even do that
correctly. And that's a simple example of the entire problem.
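
A trivial sketch in C of the kind of day-to-day discipline I mean --
validating untrusted input and actually checking what your calls return
(the port-parsing scenario is purely illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    /* Parse an untrusted "port" string: refuse anything that is not a
       clean number in the valid range. */
    static int parse_port(const char *input, unsigned short *port_out)
    {
        char *end = NULL;
        long value;

        if (input == NULL || *input == '\0')
            return -1;                   /* reject empty input */

        value = strtol(input, &end, 10);
        if (*end != '\0')                /* trailing junk -> reject */
            return -1;
        if (value < 1 || value > 65535)  /* out of range -> reject */
            return -1;

        *port_out = (unsigned short)value;
        return 0;
    }

    int main(int argc, char **argv)
    {
        unsigned short port;

        if (argc < 2 || parse_port(argv[1], &port) != 0) {
            fprintf(stderr, "usage: demo <port 1-65535>\n");
            return EXIT_FAILURE;         /* fail closed on bad input */
        }
        printf("using port %u\n", port);
        return EXIT_SUCCESS;
    }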

Until we stop blaming others and take responsibility for the code we write,
things won't change. As Gary mentioned... there is an attitude from many
developers that they can abdicate responsibility for what they are doing. It's
hard to get them to even SAY security. Never mind actually making an effort
to do something about it.

I appreciate your opinion that you need to approach and work with the
decision makers. You are absolutely correct. That's a strategic component of
writing secure code. However, the tactical approach to actually DO IT falls on
developers. It shouldn't BE a special process. It should be part of their
day-to-day thinking, attitude and function in their field of work.

Guess where this all starts? By educating the developer. Which is why the
responsibility to tactically do it rests squarely with them.

-- 
Regards,
Dana Epp
Microsoft Security MVP

On Tue, Nov 25, 2008 at 9:01 AM, Stephen Craig Evans <
[EMAIL PROTECTED]> wrote:

> Gunnar,
>
> Developers have no power. You should be talking to the decision makers.
>
> As an example, to instill the importance of software security, I talk
> to decision makers: project managers, architects, CTOs (admittedly,
> this is a blurred line - lots of folks call themselves architects). If
> I go to talk about software security to developers, I know from
> experience that I am probably wasting my time. Even if they do care,
> they have no effect overall.
>
> Your target and blame is wrong; that's all that I am saying.
>
> Stephen
>
> On Wed, Nov 26, 2008 at 12:48 AM, Gunnar Peterson
>  <[EMAIL PROTECTED]> wrote:
> > Sorry I didn't realize "developers" is an offensive ivory tower in other
> > parts of the world, in my world its a compliment.
> >
> > -gunnar
> >
> > On Nov 25, 2008, at 10:30 AM, Stephen Craig Evans wrote:
> >
> >> HI,
> >>
> >> "maybe the problem with least privilege is that it requires that
> >> developers:..."
> >>
> >> IMHO, your US/UK ivory towers don't exist in other parts of the world.
> >> Developers have no say in what they do. Nor, do they care about
> >> software security and why should they care?
> >>
> >> So, at least, change your nomenclature and not say "developers". It
> >> offends me because you are putting the onus of knowing about software
> >> security on the wrong people.
> >>
> >> Cheers,
> >> Stephen
> >>
> >> On Tue, Nov 25, 2008 at 10:18 PM, Gunnar Peterson
> >> <[EMAIL PROTECTED]> wrote:
> >>>
> >>> maybe the problem with least privilege is that it requires that
> >>> developers:
> >>>
> >>> 1. define the entire universe of subjects and objects
> >>> 2. define all possible access rights
> >>> 3. define all possible relationships
> >>> 4. apply all settings
> >>> 5. figure out how to keep 1-4 in synch all the time
> >>>
> >>> do all of this before you start writing code and oh and there are
> >>> basically no tools that smooth the adoption of the above.
> >

Re: [SC-L] How Can You Tell It Is Written Securely?

2008-11-27 Thread Dana Epp
Code auditing. Untrusted code cannot be deemed safe. If you plan to
outsource your development you must have implicit trust in that
firm, or you need internal assets with the ability to complete
the audits separately. There is no magic wand here.

But the same risk can be said to exist with in-house development. We
have all heard of employees writing time bombs or backdoors in their
code. No difference here. You are just transferring the risk.

If you want to trust the code, you need a process in place where you
separate code development from code review. That way, it takes a
minimum of two members of the dev team colluding to do harm in your
codebase before the risk elevates.

Of course, the auditor better know what the hell he or she is doing.
Otherwise, stuff will still get through.

-- 
Regards,
Dana Epp
Microsoft Security MVP

On Wed, Nov 26, 2008 at 6:03 PM, Mark Rockman <[EMAIL PROTECTED]> wrote:
> OK.  So you decide to outsource your programming assignment to Asia and
> demand that they deliver code that is so locked down that it cannot
> misbehave.  How can you tell that what they deliver is truly locked down?
> Will you wait until it gets hacked?  What simple yet thorough inspection
> process is there that'll do the job?  Doesn't exist, does it?
>
>
> MARK ROCKMAN
> MDRSESCO LLC
> ___
> Secure Coding mailing list (SC-L) SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php
> SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
> as a free, non-commercial service to the software security community.
> ___
>
>
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___