Re: [SC-L] Unclassified NSA document on .NET 2.0 Framework Security

2008-11-25 Thread Shea, Brian A
Security is a trade-off game between risk and cost in my experience.  So
the least privilege question comes down to practical matters like
knowing the execution environment, knowing the requirements of the tasks
being executed, and knowing where those intersect with the ability of
the user or application security context to provide or request those
privileges.

If I'm executing with high privilege on only one box vs. being run on
many, am I providing least privilege?
If I am running at scale with limited rights covering all use cases,
vs. running with minimum rights and requiring user prompts or action to
gain elevation when a rare use case requires it, am I providing least
privilege?

The answer in my experience has not been a black-and-white, binary
decision, but rather a trade-off where the risk to the environment /
data is evaluated against the cost to provide higher security, or the
cost of added user interaction / inconvenience.

In this way the definition of least privilege is comparable to the one a
judge once gave for pornography: "I'll know it when I see it."  Put a
different way, least privilege can ONLY be applied when you know all of
the details of a specific instance of a system, and not across all
instances of a system.  Sort of like the Heisenberg Uncertainty
Principle for Computer Security.

I can either know exactly what least privilege is for a specific
system/use case, or I can talk about least privilege abstractly across
all systems, but not both with appreciable accuracy.

Re: devs vs CTOs vs Program / project managers
The power is shared across these areas, and sometimes poorly.  The dev
can't be solely responsible since they would need a very complete
requirement set to hold that bag, yet we rarely get those requirements
in the degree of detail we'd need.  However the dev SHOULD also know
about common security issues and techniques, and ensure they avoid them,
even if it means pushing back on some requirements to achieve the
security needed or if they are not specified in the requirements.  The
QA / Testing team should be able to test for requirements and security
issues well enough to verify that the target states for functionality and
security are both met to acceptable levels.  Application security is a
team sport.  If we try to pin the responsibility on any one group or
person, we leave out aspects that they don't typically control (some
small apps or processes might be exceptions, but for large apps I'd guess
this holds true).

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of Gunnar Peterson
Sent: Tuesday, November 25, 2008 9:49 AM
To: Stephen Craig Evans
Cc: Secure Mailing List
Subject: Re: [SC-L] Unclassified NSA document on .NET 2.0 Framework Security

look, i am a consultant. i work in lots of different companies. lots  
of different projects. i don't see these distinctions in black and  
white. sometimes the cto and managers are best positioned to help  
companies develop more secure software, sometimes architects,  
sometimes auditors, and many many times in my experience developers  
are best positioned.

but i really, truly do not care who does it. my only goal is more  
effective security mechanisms and some pragmatic roadmap to get there.  
we are in the infancy of this industry (think automotive safety circa  
1942, all seat belts and brakes), we are in no position to turn away  
help from anyone who can help. every company and every project is  
different, if your organization is set up so that developers are not  
empowered, but managers and CTOs are then by all means work with them.

but actually the main point of my post and the one i would like to  
hear people's thoughts on - is to say that attempting to apply  
principle of least privilege in the real world often leads to drilling  
dry wells. i am not blaming any group in particular i am saying i  
think it is in the too hard pile for now and we as software security  
people should not be advocating for it until or unless we can find  
cost effective ways to implement it.


On Nov 25, 2008, at 11:28 AM, Stephen Craig Evans wrote:

 It's a real cop-out for you guys, as titans in the industry, to go
 after developers. I'm disappointed in both of you. And Gary, you said
 "One of the main challenges is that developers have a hard time
 thinking about the principle of least privilege."

 Developers are NEVER asked to think about the principle of least
 privilege. Or your world of software security must be very very very
 different from mine (and I think my world at least equals yours, but
 by about 2 billion people more, which might be irrelevant now but a
 little more relevant in the future :-)

 With the greatest, deepest respect to both of you,

 On Wed, Nov 26, 2008 at 1:01 AM, Stephen Craig Evans

 Developers have no power. You should be talking to the decision makers.

 As an example, to instill the importance of software security, I talk
 to decision makers: 

Re: [SC-L] Programming language comparison?

2008-02-06 Thread Shea, Brian A
It seems like this exchange is focused on whether bug / flaw classes can
be applied to "All" programming languages or not.  Isn't the question at
hand which languages have the property "Subject to bug / flaw class XXX"
(true | false), and not whether you can find one or more class that fits
the "All" category?

What we need is a coherent dataset showing the languages that have been
assessed, and the classes of bugs or flaws each is subject to.  Then I
could search that dataset to find the listing of all languages that are
/ are not subject to a given security bug class when doing assessments or
deciding on my coding language.

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of ljknews
Sent: Tuesday, February 05, 2008 8:37 PM
Subject: Re: [SC-L] Programming language comparison?

At 4:44 PM -0500 2/5/08, Steven M. Christey wrote:
 On Mon, 4 Feb 2008, ljknews wrote:
  (%s to fill up disk or memory, anybody?), so it's marked
  All and it's not in the C-specific view, even though there's a
  concentration of format strings in C/C++.

 It is marked as "All"?

 What is the construct in Ada that has such a risk ?
 Hmm, I don't see any, but then again I don't know Ada.  Is there no
 equivalent to format strings in Ada?  No library support for it?

Not that I know of, but if you can specify a Pascal equivalent
I might be able to see what you are aiming at.  Have you evaluated
Pascal for this defect that is present in "All" languages?

 Your question actually highlights the point I was trying to make -
 we don't yet have a way of specifying language families, such as "any
 language that directly supports format strings" or "any language with
 dynamic evaluation."

Your choice of terminology is yours to make, only within the
bounds of reasonable use of English.  In English there is a
distinct difference between the terms ALL and SOME, between
the terms ALL and MANY and even between the terms ALL and MOST.
Larry Kilgallen
Secure Coding mailing list (SC-L)
List information, subscriptions, etc -
List charter available at -
SC-L is hosted and moderated by KRvW Associates, LLC
as a free, non-commercial service to the software security community.

Re: [SC-L] Insecure Software Costs US $180B per Year - Application and Perimeter Security News Analysis - Dark Reading

2007-11-30 Thread Shea, Brian A
IMO the path to changing the dynamics for secure coding will reside in
the market, the courts, and the capacity of the software industry to
measure and test itself and to demonstrate the desired properties of
security, quality, and suitability for purpose.  In today's market we do
well in suitability for purpose (aka marketing then testing, pilot, and
purchase) but I believe we do poorly at security and quality.

Rather than trying to tax software vendors for being bad, it will likely
end up being more successful to reward them for being good in the form of
market support and sales.  That dynamic will probably work better, and
will empower the companies and individuals (who choose to get this
involved) to make the choices of security and quality against cost or
convenience.  The punishment for bad software is lost sales and eventual
loss of the use of that product within the market.

Software vendors will need a three-tier approach to software security: dev
training and certification, internal source testing, and external
independent audit and rating.  The open source version of this can be
the same, but applied more individually or at the derivative product
level (i.e., if I make a Linux-based appliance from open source, I become
the owner of the issues in that Linux derivative).

The legal side will need to alter the EULA away from a hold harmless
model to one where vendors and software buyers can assert that the
software is expected to perform at certain security or quality levels,
and have known backing from a legal recourse side.  Companies that
make/sell software and can be sued offer more recourse than open source,
but that's not better or worse, just different.  The buyer can choose the
degree of security and quality rating (based on the audit etc) and the
legal recourse they want via the SLA or choice to use open source.  For
a high assurance system one might choose a software vendor with a
contract and SLA, while also choosing open source for lower assurance
efforts that come with less recourse but less cost too.  This opens a
market for companies who choose to resell open source, but provide the
support and assurances.  It also allows anyone to choose which factors
are important, and buy / use accordingly.  

If choice results in downstream impacts, then the deployer of the
software is initially accountable, and they must determine if they have
recourse via SLA or contract to their provider.  If they accepted risk
of a non-supported software package, then their deployment and the
ensuing harm is their responsibility.  If they have recourse, as the
saying goes they pass the savings on.

Home users are also empowered if they choose to be, but overall they
gain in two major ways.  The market will be driven to more secure
software over time without their direct knowledge (as companies and
governments choose to require software to be more secure) and they can
benefit from any legal recourses that are available for notable security
failures or quality gaps. 

Using these factors anyone could make decisions based on the need for
recourse (courts), assurance (market), and quality (industry rating and
standards for security and quality) and come away with software that
meets their needs in each area, without excluding open or closed source
or leaving the corporate / consumer customers unprotected.

DISCLAIMER: Views are my own, and not those of my employer, and were
generated over a cup of coffee in a fairly stream of consciousness kind
of way.  Grains of salt not supplied, but are recommended when consuming
the contents. :)

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of Leichter, Jerry
Sent: Friday, November 30, 2007 6:28 AM
To: der Mouse
Subject: Re: [SC-L] Insecure Software Costs US $180B per Year -
Application and Perimeter Security News Analysis - Dark Reading

|  Just as a traditional manufacturer would pay less tax by
|  becoming greener, the software manufacturer would pay less tax
|  for producing cleaner code, [...]
|  One could, I suppose, give rebates based on actual field experience:
|  Look at the number of security problems reported per year over a
|  two-year period and give rebates to sellers who have low rates.
| And all of this completely ignores the $0 software market.  (I'm
| carefully not saying free, since that has too many other meanings,
| some of which have been perverted in recent years to mean just about
| the opposite of what they should.)  Who gets hit with tax when a bug
| is found in, say, the Linux kernel?  Why?
I'll answer this along the lines of my understanding of the proposal at
hand.  I have my doubts about the whole idea, for a number of reasons,
but if we grant that it's appropriate for for-fee software, it's easy to
decide what happens with free software - though you won't like the
answer:  The user of the software pays anyway.  The cost is computed in
some other way than as a percentage of the 

Re: [SC-L] Software security video podcast

2007-10-29 Thread Shea, Brian A
IMO (IANAL) this is a position that is increasingly untenable as we move
forward, especially in the consumer markets.  As a customer I do, in
fact, expect software to operate correctly (per features and functions
promised / contracted) but also securely, in that it doesn't contain
bugs or insecure data handling that could compromise the app, data, or
my systems.  I agree that a corporation should be wary of the contract /
RFP language and commitments, but I can't and don't expect consumers to.
Frankly even corporations should be able to expect reasonable
performance and quality from their software vendors without being
expected to explicitly ask.

Apparently the UK House of Lords sees the issue as described in their
Fifth Report here:

And commented on by a participant here:

"The third area, and this is where the committee has been most
far-sighted, and therefore in the short term this may well be their most
controversial recommendation, is that they wish to see a software
liability regime, viz: that software companies should become responsible
for their security failures." -Richard Clayton, from the blog linked

If a company produces a product that contains preventable safety issues,
even ones not explicitly requested, would you let them stay above
liability?
If a car company built a car that exploded when it was hit, would you
allow them to avoid liability because no one asked for that NOT to happen?
If a drug company produced a drug that caused serious health issues or
death when using it, would they be exempt from liability because it
fixed the heart as requested, but no one asked for the liver to stay
healthy in the process?

Most wouldn't, and they would cite the reasonable person concept (see: )
as justification for not including the droves of issues that COULD be
listed explicitly but are implied due to a reasonable person expecting
them to be in place.

Again IMO, in a Kano model ( ),
software security has moved from Indifference (customer doesn't care if
it is present or not) to currently being a Performance feature (more is
better, but less is acceptable) as part of software today.  It is moving
ever closer to Basic (this feature is a Must Have for a product in the
field) and will likely be making that transition in the next 4-8 years.

Disclaimer: personal views here, not representative of the company I
work for etc.

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of John Mason Jr
Sent: Saturday, October 27, 2007 10:12 AM
To: Secure Coding
Subject: Re: [SC-L] Software security video podcast

J.M. Seitz wrote:
 Software security can be tricky when it comes to requirements, 
 mostly because customers and consumers don't explicitly demand
 security; rather, they implicitly expect it.
 Wait a second here, don't customers also implicitly expect that the
 software is going to run? I mean, I haven't seen a requirement
 _ever_ that said "The software must start."  They just implicitly
 expect that it's going to do that.
 Doesn't seem like a big surprise that most customers will _expect_
 "Hey, I don't want this software pwnable after you're done with it."
 Not sure where the trickiness you are referring to comes from?
 ps. Didn't AW publish your book(s)? :) I would be real surprised
 [turning on Tom Ptacek's snarky bit] if there's any mention of them.

If it isn't in the RFP then it's not a requirement, regardless of what
the customer implicitly expected.

The customers don't see a value to the added cost(s) of a secure system,
unless they have a business requirement to adhere to such as PCI
compliance, or HIPAA.

If a requirement is important to the business it must be explicit, but
this means the folks writing the RFP must have the understanding to make
sure it is in the RFP; otherwise you could end up with the better
system (more secure) not being selected because it costs more.

Now the company who bids the project in a more secure fashion will also
get a tangible benefit from code review and other processes that make
for a secure system, but they won't invest in this avenue until the RFP
requires it.



Re: [SC-L] Perspectives on Code Scanning

2007-06-07 Thread Shea, Brian A
 "And answering that correctly requires input from the customer.  Which
 we (TINW) won't have until customers recognize a need for security and
 get enough clue to know what they want to be secure against."

I can't exactly agree with this as there is a distinction (or should be
IMO) between security features and security of the code.

If you are asserting that the customer must tell you how many security
features to implement based on their requirements, I'll agree 100%.
Stuff like, "I don't need 3 types of military-grade encryption added,
I'm just doing recipe sorting."  That kind of stuff.

However, if you are waiting around for the customer to request software
that isn't subject to buffer overflows or can't be hijacked through bad
input validation, I think you are missing the point.  That level of
security comes out of the quality of the dev team, process, and company
producing the software, not out of customer requirements.  Customers
expect this level of security implicitly, just like they expect their
toasters won't burst into flames every time they try to toast a bagel.
They have learned to accept less because of the craptastic quality of
code from many vendors for many years, but would happily revert to the
initial expectation of "I just want it to work and not provide
additional risk to my organization."

It remains (weakly) arguable that IF the customer really wanted secure
software they'd have stronger legal agreements with suppliers that allow
recourse and compensation for failed security, but that brings in yet
another group of the often technically clueless: lawyers.

I do believe that the focal point of getting change from where we stand
now is at the feet of the customer, because it starts out as an economic
problem first.  If you pay more to get secure code or pay to buy weak
security but fast to market code, then you somewhat get what you paid
for.  Vendors will produce the lowest quality for the highest price if
the market lets them.

PS - speaking of lawyers... :)  The views expressed here are my own, not
those of my company ... etc etc.

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of der Mouse
Sent: Thursday, June 07, 2007 8:07 AM
Subject: Re: [SC-L] Perspectives on Code Scanning

 --- the software should work and be secure (co-requirements).

And already we have trouble, because this immediately raises not only
the question "what does 'work' mean?" but also "secure against what?"

And answering that correctly requires input from the customer.  Which
we (TINW) won't have until customers recognize a need for security and
get enough clue to know what they want to be secure against.

And we all know how likely customers are to have clue (of just about
any sort).

(Actually, there are markets where the customer usually is clued.
Oddly enough, they also tend to be markets wherein software isn't
security Swiss cheese. :-)

/~\ The ASCII   der Mouse
\ / Ribbon Campaign
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B

Re: [SC-L] What defines an InfoSec Professional?

2007-03-08 Thread Shea, Brian A
The right answer is both IMO.  You need the thinkers, integrators, and
operators to do it right.  The term Security Professional at its basic
level simply denotes someone who works to make things secure.

You can't be secure with only application security any more than you can
be secure with only firewalls or NIDs.  The entire ecosystem and
lifecycle must be risk managed and that is accomplished by security
professionals.  Each professional may have a specialty due to the
breadth of topics covered by Security (let's not forget our Physical
Security either), but all would be expected to act as professionals.
Professionals in this definition being people who are certified and
expected to operate within specified standards of quality and behavior
much like CISSP, CPA, MD, etc.

-Original Message-
[mailto:[EMAIL PROTECTED] On Behalf Of Gunnar Peterson
Sent: Thursday, March 08, 2007 9:13 AM
Subject: Re: [SC-L] What defines an InfoSec Professional?

actually just the former. Robert Garigue characterized firewalls, nids,
et al. as good network hygiene: the equivalent of a dentist telling you
to brush your teeth. An infosec pro needs much more depth than that. The
model is Charlemagne.

-Original Message-
From: McGovern, James F (HTSC, IT) [EMAIL PROTECTED]
Date: Thursday, Mar 8, 2007 10:27 am
Subject: [SC-L] What defines an InfoSec Professional?

If you have two individuals, one of which has been practicing secure
practices and encouraging others to do so for years while another
individual was involved with firewalls, intrusion detection,
information security policies and so on, are they both information
security professionals or just the latter?




RE: [SC-L] How do we improve s/w developer awareness?

2004-12-02 Thread Shea, Brian A
FYI this is part of a notice that went out to financial institutions

Complete Financial Institution Letter: 

Management is responsible for ensuring that commercial off-the-shelf
(COTS) software packages and vendor-supplied in-house computer system
solutions comply with all applicable laws and regulations.
The guidance contained in this financial institution letter will assist
management in developing an effective computer software evaluation
program to accomplish this objective. 

An effective computer software evaluation program will mitigate many of
the risks - including failure to be regulatory compliant - that are
associated with software products throughout their life cycle. 

Management should use due diligence in assessing the quality and
functionality of COTS software packages and vendor-supplied in-house
computer system solutions.

FDIC-Supervised Banks (Commercial and Savings) 

-Original Message-
On Behalf Of Greenarrow 1
Sent: Monday, November 29, 2004 6:08 PM
To: George Capehart
Subject: Re: [SC-L] How do we improve s/w developer awareness?

Words could not be spoken better.  This has been my argument from the
get-go.  I, too, am tired of seeing everyone blame it on the Dev
department when the orders from above are "I want this now and fast."
Maybe we can focus on convincing upper level management that security is
as important as the money, bells, and whistles.  But while I support Dev
I still do not understand how some companies' development departments can
include tight security in a short time frame and others seem to provide
excuses or just do not

Customers are now looking at the security of programs.  Slowly, customers
are finally looking at security flaws, mainly because of the media
attention that computer software flaws are creating.  While it may only
be the top names in the software field, customers are now questioning the
other programs they use, which I support fully.  I can see this in the
rise of subscribers to Security Flaw Alerts, which has risen over 71%
within the last 3 months.

Just a word of warning: as consumers become more aware of security in the
software they purchase, companies that do not secure will start showing a
downslide in purchases.  It is happening to one major company as we write
each other on these issues.


- Original Message -
From: George Capehart [EMAIL PROTECTED]
Sent: Sunday, November 28, 2004 5:18 PM
Subject: Re: [SC-L] How do we improve s/w developer awareness?

 On Thursday 11 November 2004 10:26, Kenneth R. van Wyk allegedly wrote:
  In my business travels, I spend quite a bit of time talking with
  Software Developers as well as IT Security folks.  One significant
  difference that I've found is that the IT Security folks, by and
  large, tend to pay a lot of attention to software vulnerability and
  attack information while most of the Dev folks that I talk to are
  blissfully unaware of the likes of Full-Disclosure, Bugtraq, PHRACK,
  etc.  I haven't collected any real stats, but it seems to me to be at
  least a 90/10% and 10/90% difference.  (Yes, I know that this is a
  gross generalization and there are no doubt significant exceptions.)
  I believe that this presents a significant hurdle to getting Dev
  folks to care about Software Security issues.  Books like Gary
  McGraw's Exploiting Software do a great job at explaining how
  software can be broken, which is a great first step, but it's only a
  first step.

 Apologies for the two-week latency in this reply.  I don't have as
 much time for the lists as I used to.

 I have read the rest of this thread, and I didn't see any comments that
 address a dimension that is, for me, the most salient.  I feel like a
 broken record because this topic crops up on one security-related list
 or another at least once a quarter and I end up saying the same thing
 every time.  I'm going to say it again, though, because I really
 believe that it is important . . . Dev folks will care about security
 when their managers care about security.  If time-to-market and bells
 and whistles are more important to management than security is,
 that's where dev folks will spend their time.  It is their job to do
 what their managers tell them to do.  When management decides that it
 is more important to deliver a product that is based on a robust
 security architecture and which is built and tested with security in
 mind, it will be.  Until then, it won't.  At one time or another in my
 career, I have held just about every position in the software
 development food chain.  I have had the president of the company tell
 me:  "I don't care what it takes, you /*will*/ have this project done
 and delivered in four months!"  Well, we delivered a