Re: [SC-L] Insecure Software Costs US $180B per Year - Application and Perimeter Security News Analysis - Dark Reading

2007-11-30 Thread Leichter, Jerry
|  Just as a traditional manufacturer would pay less tax by
|  becoming greener, the software manufacturer would pay less tax
|  for producing cleaner code, [...]
|  One could, I suppose, give rebates based on actual field experience:
|  Look at the number of security problems reported per year over a
|  two-year period and give rebates to sellers who have low rates.
| And all of this completely ignores the $0 software market.  (I'm
| carefully not saying free, since that has too many other meanings,
| some of which have been perverted in recent years to mean just about
| the opposite of what they should.)  Who gets hit with tax when a bug
| is found in, say, the Linux kernel?  Why?
I'll answer this along the lines of my understanding of the proposal at
hand.  I have my doubts about the whole idea, for a number of reasons,
but if we grant that it's appropriate for for-fee software, it's easy to
decide what happens with free software - though you won't like the
answer:  The user of the software pays anyway.  The cost is computed in
some other way than as a percentage of the price - it's not clear exactly
how.  Most likely, it would be the same tax as is paid by competing
non-free software with a similar security record.  (What you do when
there is no such software to compare against is an interesting problem
for some economist to work on.)

The argument the author is making is that security problems impose costs
on *everyone*, not just on the party running the software.  This is a
classic economic problem:  If a party can externalize its costs - i.e.,
dump them on other parties - its incentives become skewed.  Right now,
the costs of security problems for most vendors are externalized.
Where do they go?  We usually think of them as borne by that vendor's
customers.  To the degree that's so, the customers will have an
incentive to push costs back on to the vendor, and eventually market
mechanisms will clean things up.  To some degree, that's happened to
Microsoft:  However effective or ineffective their security efforts,
it's impossible to deny that they are pouring large sums of money
into the problem.

To the degree that the vendors' customers can further externalize the
support onto the general public, however, they have no incentive to
push back either, and the market fails.  This is pretty much the case
for personal, as opposed to corporate, users of Microsoft's software.
Imposing a tax is the classic economic answer to such a market failure.
The tax's purpose is (theoretically) to transfer the externalized costs
back to those who are in a position to respond.  In theory, the cost
for security problems - real or simply possible; we have to go with
the latter because by the time we know about the former it's very
late in the game - should be borne by those who develop the buggy
code, and by those who choose to use it.  A tax on the code itself
directly takes from the users of the code, indirectly from the
vendors because they will find it more difficult to compete with
vendors who pay lower tax rates, having written better code.  It's
much harder to impose the costs directly on the vendors.  (One way
is to require them to carry insurance - something we do with, say,
trucking companies).

In any case, these arguments apply to free software in exactly the same
way they do for for-fee software.  If I give cars away for free, should I be
absolved of any of the costs involved if they pollute, or cause
accidents?  If I'm absolved, should the recipients of those cars also be
absolved?  If you decide the answer is yes, what you've just decided
is that *everyone* should pay a hidden tax to cover those costs.  In
what way is *that* fair?
-- Jerry
Secure Coding mailing list (SC-L)
List information, subscriptions, etc -
List charter available at -
SC-L is hosted and moderated by KRvW Associates, LLC (
as a free, non-commercial service to the software security community.

Re: [SC-L] Insecure Software Costs US $180B per Year - Application and Perimeter Security News Analysis - Dark Reading

2007-11-29 Thread Leichter, Jerry
| FYI, there's a provocative article over on Dark Reading today.
| The article quotes David Rice, who has a book out called
| Geekconomics: The Real Cost of Insecure Software.  In it, he tried
| to quantify how much insecure software costs the public and, more
| controversially, proposes a vulnerability tax on software
| developers.  He believes such a tax would result in more secure
| software.
| IMHO, if all developers paid the tax, then I can't see it resulting in
| anything other than more expensive software...  Perhaps I'm just
| missing something, though.
The answer to this is right in the article:

Just as a traditional manufacturer would pay less
tax by becoming greener, the software manufacturer
would pay less tax for producing cleaner code, he
says. Those software manufacturers who pay less
tax would pass on less expense to the consumer, just as a
regular manufacturing company would pass on less
carbon tax to their customers, he says.

He does go on to say:  

It's not clear how the software quality would be
measured ... but the idea would be for a software
maker to get tax breaks for writing code with fewer
security vulnerabilities.

And the consumer ideally would pay less for more
secure software because tax penalties wouldn't get
passed on, he says.

Rice says this taxation model is just one of many
possible solutions, and would likely work in concert
with tort law or tighter governmental regulations

So he's not completely naive, though the history of security metrics and
standards - which tend to produce code that satisfies the standards
without being any more secure - should certainly give one pause.

One could, I suppose, give rebates based on actual field experience:
Look at the number of security problems reported per year over a two-
year period and give rebates to sellers who have low rates.  There are
many problems with this, of course - not the least that it puts new
developers in a tough position, since they effectively have to lend
the money for the tax for a couple of years in the hopes that they'll
get rebates later when their code is proven to be good.

-- Jerry

Re: [SC-L] OWASP Publicity

2007-11-16 Thread Leichter, Jerry
| ...I've never understood why it is that managers who would never dream
| of second-guessing an electrician about electrical wiring, a
| construction engineer about wall bracing, a mechanic about car
| repairs, will not hesitate to believe - or at least act as though they
| believe - they know better than their in-house experts when it comes
| to what computer, especially software, decisions are appropriate, and
| use their management position to dictate choices based on their
| inexpert, incompletely informed, and often totally incompetent
| opinions.  (Not just security decisions, either, though that's one of
| the cases with the most unfortunate consequences.)
This is perhaps the most significant advantage to licensing and other
forms of official recognition of competence.  At least in theory, a
licensed professional is bound by an officially-sanctioned code of
conduct to which he has to answer, regardless of his employment chain
of command.

In reality, of course, things are not nearly so simple, along many
dimensions.  Theory and practice are often very different.  However ...
the next time you run into a situation where you are forced into
a technically bad decision because some salesman took a VP to a nice
golf course - imagine that you could pull down some official regulation
that supported your argument.  The world has many shades of gray.

-- Jerry

Re: [SC-L] DH exchange: conspiracy or ignorance?

2007-09-19 Thread Leichter, Jerry
| Yes, this is certainly bad and a very interesting finding.  These
| checks should clearly be present.  Are there serious practical
| ramifications of this problem though?  In other words, how likely is
| it that the generated public key in the DH key exchange will actually
| be 0 or 1?  It can certainly happen, but our passive attacker would
| have to be passive for a very long time and there is no guarantee that
| the secret key they might eventually get will be of interest to them
| (since the attacker cannot control when a weak public key is
| produced).  Just a thought.
What's special about a computed local value of 1 is that anyone can
easily compute the log of 1:  It's 0.  (Note that a public key value
of 0 is impossible - 0 isn't in the group.  The same goes for any
value greater than p-1.  Checking for these isn't so much checking
for security as checking for the sanity of the sender - if he sends
such a value, he's buggy and shouldn't be trusted!)

In typical implementations of DH, both the group and the generator are
assumed to be public.  In that case, anyone can generate a table of
x, g^x pairs for as many x's as they have the resources to cover.  Given such a
table, a passive attacker can find log of the secret whenever the
secret happens to be in the table.

Of course, the group is chosen large enough that any conceivable table
will only cover a tiny proportion of the possible values, so in practical
terms this attack is uninteresting.

The fact that two entries in the table (for x=0 and x=p-2) can be
computed in your head (well, you might need a pencil and paper for the
second) doesn't make the table any more of a viable attack mechanism.
So the passive observer attack doesn't make much sense to me.

Is there some other attack specific to these values that I'm missing?

BTW, the paper suggests a second test, (K_a)^g = 1 (mod p).  This test
makes sense if you're working over a subgroup of Z* mod p (as is often,
but not always, done).  If you're working over the full group, any
K_a between 1 and p-1 is legal, so this can only test the common
parameter g, which is fixed.  That hardly seems worth doing - if the
public parameters are bad, you're completely screwed anyway.

-- Jerry

| Evgeny
| -
| Evgeny Lebanidze
| Senior Security Consultant, Cigital
| 703-585-5047,
| Software Confidence.  Achieved.
| -Original Message-
| From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kowsik
| Sent: Wednesday, September 19, 2007 1:24 AM
| To:
| Subject: [SC-L] DH exchange: conspiracy or ignorance?
| K.
| ps: I work for Mu.

Re: [SC-L] Really dumb questions?

2007-08-30 Thread Leichter, Jerry
| Most recently, we have met with a variety of vendors including but not
| limited to: Coverity, Ounce Labs, Fortify, Klocwork, HP and so on. In
| the conversation they all used interesting phrases to describe how they
| classify their competitors' value propositions. At some level, this has
| managed to confuse me and I would love if someone could provide a way to
| think about this in a more boolean way.
| - So when a vendor says that they are focused on quality and not
| security, and vice versa what exactly does this mean? I don't have a
| great mental model of something that is a security concern that isn't a
| predictor of quality. Likewise, in terms of quality, other than
| producing metrics on things such as depth of inheritance, cyclomatic
| complexity, etc wouldn't bad numbers here at least be a predictor of a
| bad design and therefore warrant deeper inspection from a security
| perspective?
What you're seeing here is an attempt to control the set of
measurements that will be applied to your product.  If you say you have
a product that tests security, on the one hand, people will ask you
whether out-of-the-box you detect some collection of known security
bugs.  That collection might be good, or it might contain all kinds
of random junk gathered from 20 years of Internet reports.  As a
vendor, I (quite reasonably) don't want to be measured against some
unpredictable, uncontrollable set of hard (easily quantifiable) tests.

Beyond that, saying you provide security gets you right into the
maelstrom of security standards, certification, government and
accounting requirements, etc.  Should there be a security problem
with customer code that your product accepted, if you said you
provided security testing, expect to be pulled into any bad
publicity, lawsuits, etc.

It's so much safer to say you help ensure quality.  That's virtually
impossible to quantify or test, and you clearly can only be one
part of a much larger process.  If things go wrong, it's very hard
for the blame to fall on your product (assuming it's any good at
all).  After all, if you pointed out 100 issues, the fixing of which
clearly improved quality, well, someone else must have screwed up in
letting one other issue through that had a quality impact - quality
is a result of the whole process, not any one element of it.  You
can't test quality in.  (This sounds like excuses, but it happens
also to be mainly true!)
| - Even if the rationale is more about people focus rather than tool
| capability, is there anything else that would prevent one tool from
| serving both purposes?
Well, I can speak to two products whose presentations I saw about a
year ago.  Fortify concentrated on security.  They made the point
that they had hundreds of pre-programmed tests specific to known
security vulnerabilities.  The weaknesses in their more general
analysis - e.g., they had almost no ability to do value flow
analysis - they could dismiss as not relevant to their specific
mission of finding security flaws.

Coverity, on the other hand, did very deep general analysis, but
for the most part found security flaws as a side-effect of general
fault-finding mechanisms.  Fortify defenders - including some on
this list - pointed out this limitation, saying that the greatest
value of Fortify came from the extensive work that had gone into
analyzing actual security holes in real code and making sure they
would be caught.

In practice, while there was an overlap between what the two
products found, neither subsumed the other.  If you can afford
it, I'd use both.  (In theory, you could produce a security hole
library for Coverity that copied the one in Fortify.  But no one
is doing that.)

| - Is it reasonable to expect that all of the vendors in this space will
| have the ability to support COBOL, Ruby and Smalltalk sometime next year
| so that customers don't have to specifically request it?
Well, the effort depends on the nature of the product.  Products that
mainly scan for particular patterns are probably easier to adjust to
work with other languages than products that do deeper analysis.  On
the other hand, you have to build up a library of patterns to scan
for, which to some degree will be language-specific.

Products that depend on deep analysis of data flow and such usually
don't care much about surface syntax - a new front end isn't a
big deal - but will run into fundamental problems if they rely on
static properties of programs, like static types.  Languages like
Smalltalk, and even more so Ruby, are likely to be very problematic for
this kind of analysis, since so much is subject to change at run time.

| - Do the underlying compilers need to do something different since
| languages such as COBOL aren't object-oriented which would make analysis
| a bit different?
I doubt object orientation matters all that much.  In general, anything
that provides (a) static information (b) encapsulation makes analysis
easier.  COBOL is old enough to have elements that 

Re: [SC-L] Interesting tidbit in iDefense Security Advisory 06.26.07

2007-06-28 Thread Leichter, Jerry
On Thu, 28 Jun 2007, J. M. Seitz wrote:
| Hey there,
|  If you couldn't insert ignore directives, many people 
|  wouldn't use such tools at all, and would release code with 
|  vulnerabilities that WOULD be found by such tools.
| Of course, much like an IDS, you have to find the baseline and adjust
| your ruleset according to the norm, if it is constantly firing on
| someone accessing /index.html of your website, then that's working
| against you.
| I am not disagreeing with the fact the static source analysis is a
| good thing, I am just saying that this is a case where it failed (or
| maybe the user/developer of it failed or misunderstood its use). Fair
| enough that on this particular list you are going to defend source
| analysis over any other method, it is about secure coding after all,
| but I definitely still strongly disagree that other methods wouldn't
| have found this bug.
| Shall we take a look at the customer lists of the big source analyzer
| companies, and then cross-map that to the number of vulnerabilities
| released? 
It would be great to have that information.  Even better would be to
classify the vulnerabilities in two buckets:  Those that the analyzer
was expected to find, and those that it's not ready for.

You would have to allow for the time it takes from when the analyzer
starts being used to when software that actually went through, not just
the analyzer, but the remediation process, actually hits the streets.
For large companies and widely-used commercial products, that can be a
long time.

However ... I think the chances of getting that kind of data for
commercial projects are just about nil.  Companies generally consider
the details of
their software development processes proprietary, and at most will give
you broad generalities about the kinds of tools and processes they use.
(Given the cost of these analyzers, you'd think that potential customers
would want some data about actual payoffs.  However, I think they
recognize that no such data exists at this point.  A prudent customer
might well collect such data for himself to help in deciding whether the
contract for the analyzer is worth renewing - though even this kind of
careful analysis is very rare in the industry.)

An OSS project might be a better starting place for this kind of study.
I know that Coverity has analyzed a couple of significant pieces of OSS
for free (including, I believe, the Linux and NetBSD kernels).  It's
likely that other analyzer vendors have done something similar, though I
haven't heard about it.  A study showing how using an analyzer led to an
x% decrease in reported bugs would make for great sales copy.  (Of
course, there's always the risk that the analyzer doesn't actually help
make the software more secure!)

|   Why are we still finding bugs in software that have the SDL?
| Why are we still finding bugs in software that have been analyzed
| before the compiler has run? Why are these companies like Fortify
| charging an arm and a leg for such a technology when the bughunters
| are still beating the snot out of this stuff? 
I'm not sure that's a fair comparison.  The defenders have to plug *all*
the holes, including those using techniques that weren't known at the
time the software was produced.  The attackers only have to find one.

|   You guys all have much
| more experience on that end, so I am looking forward to your
| responses!
As with almost everything in software engineering, sad to say, there is
very little objective evidence.  It's hard and expensive to gather, and
those who are in a position to pay for the effort rarely see a major
payoff from making it.  Which would you rather do:  Put up $1,000,000 to
do a study which *might* show your software/methodology/whatever helps,
or pay $1,000,000 to hire a bunch of feet on the street to sell more
copies?  So we have to go by gut feel and engineering judgement.  Those
certainly say, for most people, that static analyzers will help.  Then
again, I'm sure most people on this list will argue that strong static
typing is essential for secure, reliable software.  Everyone has known
that for 20 years or more.  Except that ... if you look around, you'll
find many arguments these days that the run-time typing characteristic
of languages like Python or Ruby is just as good and lets you produce
software much faster.  Which argument is true?  You'd think that after
50 years of software development, we'd at least know how to frame a
test ... but even that seems beyond the state of the art.

-- Jerry

| Cheers! 
| JS

Re: [SC-L] What's the next tech problem to be solved in software security?

2007-06-08 Thread Leichter, Jerry
On Thu, 7 Jun 2007, Steven M. Christey wrote:
| On Wed, 6 Jun 2007, Wietse Venema wrote:
|  more and more people, with less and less experience, will be
|  programming computer systems.
|  The challenge is to provide environments that allow less experienced
|  people to program computer systems without introducing gaping
|  holes or other unexpected behavior.
| I completely agree with this.  This is a grand challenge for software
| security, so maybe it's not the NEXT problem.  There's a lot of
| tentative work in this area - safe strings in C, SafeInt,
| StackGuard/FormatGuard/etc., non-executable data segments, security
| patterns, and so on.  But these are bolt-on methods on top of the
| same old languages or technologies, and some of these require
| developer awareness.  I know there's been some work in secure
| languages but I'm not up-to-date on it.
| More modern languages advertise security but aren't necessarily
| catch-alls.  I remember one developer telling me how his application
| used Ruby on Rails, so he was confident he was secure, but it didn't
| stop his app from having an obvious XSS in core functionality.
|  An example is the popular PHP language. Writing code is comparatively
|  easy, but writing secure code is comparatively hard. I'm working on
|  the second part, but I don't expect miracles.
| PHP is an excellent example, because it's clearly lowered the bar for
| programming and has many features that are outright dangerous, where
| it's understandable how the careless/clueless programmer could have
| introduced the issue.  Web programming in general, come to think of
| it.
I think this all misses the essential point.

Safe strings, stack guards, non-executable data segments - these are all
solutions to yesterday's problems.  Yes, they are still important; yes,
the solutions aren't yet complete.  But the emerging problems are
exemplified by your comment about XSS.

The real issue that we still have not internalized is that the field of
operations has changed dramatically.  We still think of a program as
something that runs on some box somewhere.  The programming model is
the hardware and software inside that box.  Security is about making
sure that that box does what it's supposed to do - no more and no less.
Everything that crosses the boundary of that box is just I/O.

But increasingly that boundary is dissolving.  Yes, much of the Web 2.0
rhetoric is overblown, but much of what it's selling you - the
applications that live in the network, the storage that lives in the
network, etc. - is already here to one degree or another.  An XSS
attack cannot even be described within the confines of a single box.
It's an attack against a distributed program running on a distributed
machine consisting of at least three different boxes, each doing
exactly what it was designed to do.

Nothing we do to the hardware in the individual boxes alone will make the
slightest difference at this higher level of abstraction.  Nothing we do
in programming languages that only deal with a single box will help.  We
don't today even have any formalisms for describing these distributed
programs - much less safe ways for building them.

So I would say that the grand challenge today is to move on.  Start
thinking about how to secure the global network - not the wires, not the
individual boxes, not the API's, but the emergent properties of the
whole thing.  This will require very different thinking.  Some things
are clear:  We gained safety within the individual boxes only by giving
up some freedom.  Self-modifying code?  No thanks.  Unstructured control
flow?  We can do better.  Everything is just bits?  No, everything has a
type.  And so on.

Meanwhile, on the network side, what do we have?  Untyped byte streams;
mobile code; anything-goes paste-ups; no effective, enforced distinction
between code and data; glorification of any hacky means at all
that gets something out there *yesterday*.

The techniques we are applying with increasing success inside the box
today - hardware enforcement of executability constraints and protection
against stack overflows; type-safe, memory-safe languages; static
analysis; and so on
- are ideas that go back 30 years.  What's making them practical today
is (a) years of refinement on the basic ideas; (b) much faster hardware.
I can't recall any really new idea in this area in a *long* time.

When it comes to these new problems, we're not much further along than
we were in the early 1960's for traditional programming.  We don't have
any real models for the computation process, so no starting point for
defining safe subsets.  We don't even know what safe means for a
global computation:  At least for a program I've run on my box, I can in
principle write down what I expect it to do and not do.  For a program
using resources on my box and a bunch of other boxes, many belonging to
parties I know nothing about and who generally don't know each other
either - who even in principle can 

Re: [SC-L] temporary directories

2007-01-02 Thread Leichter, Jerry
[Moderator:  This is likely beyond the point of general interest to sc-l]

On Fri, 29 Dec 2006, ljknews wrote:

| Date: Fri, 29 Dec 2006 20:49:01 -0500
| From: ljknews [EMAIL PROTECTED]
| To:
| Subject: Re: [SC-L] temporary directories
| At 6:56 PM -0500 12/29/06, Leichter, Jerry wrote:
|  | Not on Unix, but I tend to use temporary names based on the Process ID
|  | that is executing.  And of course file protection prevents malevolent
|  | access.
|  |
|  | But for a temporary file, I will specify a file that is not in any
|  | directory.  I presume there is such a capability in Unix.
|  You presume incorrectly.  You're talking about VMS, where you can
|  open a file by file id. 
| Actually, I was talking about using the FAB$V_TMD bit when creating
| the file.
The way one would get the effect of TMD on Unix is to create the file
normally and, while keeping a descriptor open on it, delete it.  The
file will then live and be completely usable to this process or to
any other process that either (a) already has it open (legitimately
or because they snuck in on the race condition); (b) receives the open
| descriptor by inheritance after a fork(); (c) receives the open descriptor
by an explicit pass through a Unix-mode socket (a relatively little
used facility).  However, no one would be able to find the file
through any file system entry, and no user-land code could get to
it through its inode number even if it got its hands on that number.

|  One can argue this both ways, but on the specific matter of safe
|  access to temporary files, VMS code that uses FID access is much
|  easier to get right than Unix code that inherently has to walk
|  through directory trees.  On the other hand, access by file id
|  isn't - or wasn't; it's been years since I used VMS - supported
|  directly by higher-level languages (though I vaguely recall that
|  C had a mechanism for doing it).
| In Ada invoking packagename.CREATE with a null name will do the
| trick, although if your VMS experience ended before 1983 you would
| not have run into that.  But how to program easily against VMS V4.1
| when the latest version is VMS V8.3 is not a typical problem.
I think the last VMS version I actively used was 5.4 or so.

| I gather you are saying that the innards of Unix will force creation
| of an unwanted directory entry on the Ada implementation of the required
| null name support for packagename.CREATE .  The Ada implementation
| could rely on exclusive access to the file (surely Unix has that, right?)
Not typically, no.  (There are extensions, but standard Unix has only
advisory locking - i.e., locking enforced between processes that
choose to make locking calls.)

| coupled with whatever Unix has that passes for the FAB$V_DLT bit to
| delete the file on Close (such as at insert Unix words for image rundown).
There's no direct analogue.  Unix inode's are reference-counted - a
directory entry is a reference.  There's no explicit way to delete a
file - all you can do is get rid of all the references you can.  If
that gets the ref count down to 0, the file will disappear eventually.
(There's a separate count of open file descriptors to implement the
sticks-around-while-open semantics.)

| But these are problems that have been solved by those who provided the
| Ada implementation (ACT and Aonix come to mind for Unix), and thus are
| not an issue for the high level language programmer.
Presumably they do the create-the-file-and-immediately-delete-it trick.
The file must, however briefly, have an entry in some directory.
General-purpose code can't make assumptions about what directories
are available for writing, so pretty much has to put the entry in
a known, public place - almost always /tmp or /var/tmp.  Unless one
does this very carefully, it's open to various attacks.  (For one
trivial example, there is no way to tell the open() call to *always*
create a new file - you can only tell it if the file already exists,
don't open it, return an error instead.  The code had better check
for that error and do something appropriate or it can be fooled into
using a file an attacker created and already has access to.)

The techniques for doing this are complex enough - and the attacks
if you don't do it *exactly* right obscure enough - that after all
these years, attacks based on insecure temporary file creation
are still reported regularly.  (Frankly, even though I know that
these problems exist, if you were to ask me to write a secure
temporary file creator right now, I wouldn't try - I'd look for
some existing code, because I doubt I'd get it right.)

-- Jerry

| -- 
| Larry Kilgallen

Re: [SC-L] Compilers

2007-01-02 Thread Leichter, Jerry
| ...P.S.  Please watch for the unfortunate word wrap in the URL of my
| original post.  The broken link still works but goes to the wrong place!
Now, *there's* an interesting hazard!  One can imagine some interesting
scenarios where this could be more than unfortunate.  At the least,
it could be (yet another) way to hide the true target of a link.

-- Jerry


Re: [SC-L] Compilers

2006-12-29 Thread Leichter, Jerry
| I _strongly_ encourage development with maximal warnings turned on.
| However, this does have some side-effects because many compilers
| give excessive spurious warnings.  It's especially difficult to
| do with pre-existing code (the effort can be herculean).
Agreed.  Writing for maximum freedom from warnings is a learned skill,
and a discipline.  Mainly it involves avoiding certain kinds of
constructs that, when all is said and done, are usually as confusing
to human readers as they are to compilers.  There is a great deal of
overlap among writing for no warnings, writing for maximum
portability, writing for clarity, and writing for provability.
What they all come down to is:  The code sticks to the meaning of
the language definition and avoids all ambiguity; it has only one
possible interpretation, and coming to that interpretation requires
minimum work.

That said, there will always be cases where maximum speed, minimum
size, or some other external constraint drive you to do things that
don't meet these constraints.  Some of these are reasonable.  Bubble
sort is obviously correct; no O(N log N) sort is.  There are
places where you have to use comments to refer to a proof and the
kind of checking required becomes very different.  And there are
places where every nanosecond and every byte really matters.  The
conceit of all too many programmers is that *their* code is
*obviously* one of those cases.  It almost certainly isn't.
| An interesting discussion about warning problems in the Linux kernel
| can be found here:
There's an example given there of:

	bool flag = false;
	some_type *pointer;

	if (some_condition_is_true()) {
		flag = true;
		pointer = expensive_allocation_function();
	}
	if (flag) {
		use_the_fine(pointer);
	}

The compiler will warn that use_the_fine() might be called
with an uninitialized pointer.  Noticing the tie between
flag being true and pointer being initialized is beyond gcc,
and probably beyond any compiler other than some research tools.

Then again, it's not so easy for a human either beyond a trivial
example like this!

There's an obvious way to change this code that is simultaneously
warning-free, clearer - it says exactly what it means - and smaller
and equally fast on all modern architectures:  Get rid of flag,
initialize pointer to NULL, then change the test of flag to test
whether pointer is non-NULL.  (Granted, this is reading into the
semantics of expensive_allocation_function().)  Someone mentions
this in one response, but none of the other respondents pick up
on the idea, and the discussion instead goes off in very different
directions.

There's also no discussion of the actual cost in the generated code
of, say, initializing pointer to NULL.  After all, it's certainly
going to be in a register; clearing a register will be cheap.  And
the compiler might not even be smart enough to avoid a pointless
load from uninitialized memory if pointer is *not* given an
initial value.

(There is one nice idea in the discussion:  Having the compiler
tell you that some variable *could* have been declared const,
but wasn't.)

I find this kind of sideways discussion all too common when you
start talking about eliminating warnings.

| Ideally compiler writers should treat spurious warnings as serious bugs,
| or people will quickly learn to ignore all warnings.
| The challenge is that it can be difficult to determine what is
| spurious without also making the warning not report what it SHOULD
| report.  It's a classic false positive vs. false negative problem
| for all static tools, made especially hard in languages where
| there isn't a lot of information to work with.
Having assertions actually part of the language is a big help here.
This is all too rare.
-- Jerry
| --- David A. Wheeler

Re: [SC-L] re-writing college books - erm.. ahm...

2006-11-05 Thread Leichter, Jerry
Much as I agree with many of the sentiments expressed in this discussion,
there's a certain air of unreality to it.  While software has its own
set of problems, it's not the first engineered artifact with security
implications in the history of the world.  Bridges and buildings
regularly collapsed.  (In the Egyptian desert, you can still see a
pyramid that was built too aggressively - every designer wanted to
build higher and steeper than his predecessor - and collapsed before
it was finished.)  Steam boilers exploded.  Steel linings on wooden
railroad tracks stripped off, flew through the floors of passing cars,
and killed people.  Electrical systems regularly caused fires.

How do we keep such traditional artifacts safe?  It's not by writing
introductory texts with details of safety margins, how to analyze the
strength of materials, or how to include a crowbar in a power supply.
What you *may* get in an introductory course is the notion that there
are standards, that when it comes time for you to actually design
stuff you'll have to know and follow them, and that if you don't you're
likely to end up at best unemployed and possibly in jail when your
creativity kills someone.

In software, we have only the beginnings of such standards.  We
teach and encourage an attitude in which every last bit of the
software is a valid place to exercise your creativity, for better
or (for most people, most of the time) worse.  With no established
standards, we have no way to push back on managers and marketing
guys and such who insist that something must be shipped by the
end of the week, handle 100 clients at a time, have no more than
1 second response time, and run on some old 486 with 2 MB of memory.

I don't want to get into the morass of licensing.  It's a fact that
licensing is heavily intertwined with standard-setting in many
older fields, but not in all of them, and there's no obvious inherent
reason why it has to be.

The efforts to write down best practices at CERT are very important,
but also very preliminary.  As it stands, what we have to offer
are analogous to best practices for using saws and hammers and
such - not best practices for determining floor loadings, appropriate
beam strengths, safe fire evacuation routes.

Every little bit helps, but a look at history shows us just how
little we really have to offer as yet.
-- Jerry


Re: [SC-L] Apple Places Encrypted Binaries in Mac OS X

2006-11-03 Thread Leichter, Jerry
| Here's a somewhat interesting link to an eweek article that discusses
| Apple's use of encryption to protect some of its OS X binaries:
| Of course, encrypting binaries isn't anything new, but it's
| interesting (IMHO) to see how it's being used in a real OS.  The
| article cites speculation as to whether Apple uses encryption for
| anti-piracy or anti-reverse-engineering.
Actually, it's pretty clear why they are doing it, if you look at
the pieces they encrypt.  The Finder and Dock have no particularly
valuable intellectual property in them, but they are fundamental to
the GUI.  Encrypting them means that a version of OS X that's been
modified to boot on non-Apple hardware won't have a GUI, thus
limiting its attractiveness to non-hackers.  To really get the
result to be widely used, someone would have to write a replacement
for these components that looked enough like the original.  And
of course, since they built a general-purpose mechanism, nothing
prevents Apple from encrypting other components later.

Rosetta (the binary translator for PowerPC programs) isn't an essential
program.  Apple may simply consider it valuable, but I think it's more
likely that they may be preparing the way for the next step:  Encrypting
applications they deliver as native.  Since the encryption isn't
supported on PowerPC, running those applications under Rosetta would
provide a quick way to get around encryption for the native versions of
those applications.
It is worth pointing out that while Darwin, the underlying OS, is
open source, no part of the GUI code, or Rosetta, or any of
Apple's applications, has ever been open source.

-- Jerry

Re: [SC-L] Apple Places Encrypted Binaries in Mac OS X

2006-11-03 Thread Leichter, Jerry
BTW, an interesting fact has been pointed out by Amit Singh, author
of a book describing Mac OS X internals:  The first generation of
x86-based Macs - or at least some of them - contained a TPM chip
(specifically, the Infineon SKB 9635 TT 1.2).  However, Apple
never used the chip - in fact, they didn't even provide a driver
for it.  It certainly was not used in generating the encrypted
binaries.  Proof?  The most recent revision of the MacBook Pro
does *not* contain a TPM chip.

So in fact Apple is not using the TPM to certify a machine as
being real Apple hardware.  Presumably one can hack out the
decryption key - it's in the software somewhere.

-- Jerry


Re: [SC-L] Why Shouldn't I use C++?

2006-11-01 Thread Leichter, Jerry
| From time to time on this list, the recommendation is made to never
| use C++ when given a choice (most recently by Crispin Cowan in the
| re-writing college books thread). This is a recommendation I do not
| understand. Now, I'm not an expert C++ programmer or Java or C#
| programmer and as you may have guessed based on the question, I'm not
| an expert on secure coding either. I'm also not disagreeing with the
| recommendation; I would just like a better understanding.
| I understand that C++ allows unsafe operations, like buffer overflows.
| However, if you are a halfway decent C++ programmer buffer overflows
| can easily be avoided, true? If you use the STL containers and follow
| basic good programming practices of C++ instead of using C-Arrays and
| pointer arithmetic then the unsafe C features are no longer an issue?
| C and C++ are very different. Using C++ like C is arguably unsafe, but
| when it's used as it was intended can't C++ too be considered for
| secure programming?
You could, in principle, produce a set of classes that defined a
safe C++ environment.  Basically, you'd get rid of all uses of
actual pointers, and all actual arrays.  Objects would be manipulated
only through smart pointers which could only be modified in limited
ways.  In particular, there would be no way to do arithmetic on such
pointers (or maybe the smart pointers would be fat, containing
bounds information, so that arithmetic could be allowed but checked), 
arbitrarily cast them, etc.  Arrays would be replaced by objects with
[] operators that checked bounds.

The STL would be an unlikely member of such a world:  It models
iterators on pointers, and they inherit many of the latter's lack
of safety.  (There are checked versions of the STL which might
help provide the beginnings of a safe C++ environment.)

You'd have to wrap existing API's to be able to access them safely.
All the standard OS API's (Unix and Windows), everything in the C
library, most other libraries you are going to find out there - these
are heavily pointer-based.  Strings are often passed as char*'s.
Note that you can't even write down a non-pointer-based C++ string
literal.
But wait, there's more!  For example, we've seen attacks based on
integer overflow.  C++ doesn't let you detect that.  If you really want
your integer arithmetic to be safe, you can't use any of the native
integer types.  You need to define wrapper types that do something
safe in case of an overflow.  (To be fair, many otherwise-safe
languages, Java among them, don't detect integer overflow either.)

By the time you were done, you would in effect have defined a whole new
language and set of libraries to go with it.  Programming in that
language would require significant discipline:  You would not be allowed
to use most common C++ data types, library functions, idioms, and so on.
You'd probably want to use some kind of a tool to check for conformance,
since I doubt even well-intentioned human beings would be able to do so.

It happens that C++ has extension facilities that are powerful enough
to let you do most, perhaps all, of this, subject to programmer
discipline.  But is there really an advantage, when all is said and
done?
-- Jerry
[Who uses C++ extensively, and
 does his best to make his
 abstractions safe]

| Ben Corneau

Re: [SC-L] Secure programming is NOT just good programming

2006-10-12 Thread Leichter, Jerry
|  The only way forward is by having the *computer* do this kind of
|  thing for us.  The requirements of the task are very much like those
|  of low-level code optimization:  We leave that to the compilers today,
|  because hardly anyone can do it well at all, much less competitively
|  with decent code generators, except in very special circumstances.
|  Code inspection tools are a necessary transitional step - just as
|  Purify-like tools are an essential transitional step to find memory
|  leaks in code that does manual storage management.  But until we can
|  figure out how to create safer *languages* - doing for security what
|  garbage collection does for memory management - we'll always be
|  several steps behind.
| It is not adequate to *create* safer languages - it is necessary to
| have developers *use* those languages.  Given the emphasis on C and
| C++ within posts on this list, that seems a long way off.
Fifteen years ago, the idea that a huge portion of new software would be
developed in garbage-collected languages with safe memory semantics
would have seemed a far-off dream.  But in fact we are there today,
with Java and C#, not to mention even higher-level languages, from
Python to Ruby.
(Of course, then you can go to PHP.  We know how secure that's turned
out to be... though that's not really a fair attack:  The attacks against
PHP are new style - directory traversals, XSS - and nothing out there
provides any inherent protection against them either.)

Yes, there is still *tons* of C and C++ code out there, and more is still
being developed.  But there are many places where you have to justify
using C++ rather than Java.  (There are good reasons, because neither
Java nor C# is really quite there yet for many applications.)

In any case, language is to a degree a misdirection on my part.  What
matters is not just the language, but the libraries and development
methodologies and the entire development environment.  Just as security
properties are *system* properties of the full system, suitability for
development of secure systems is a property of the entire
development methodology/mechanism/environment.  C/C++ plus static
analysis plus Purify over test runs with good code coverage plus
automated fuzz generation/testing would probably be close to as good
as you can get today - not that any such integrated system exists
anywhere I know of.  Replacing some of the C/C++ libraries with
inherently safer versions could only help.

It's not that training in secure coding practices isn't important.
We're starting from such a low level today that this kind of low-lying
fruit must be picked.  I'm looking out beyond that.  I think security
awareness, on its own, will only get us so far - and it's not nearly
far enough.
-- Jerry


Re: [SC-L] Retrying exceptions - was 'Coding with errors in mind'

2006-09-06 Thread Leichter, Jerry
| Oh, you mean like the calling conventions on the IBM Mainframe where a dump
| produces a trace back up the call chain to the calling program(s)?  Not to
| mention the trace stack kept within the OS itself for problem solving
| (including system calls or SVC's as we call them on the mainframe).   And
| when all else fails, there is the stand alone dump program to dump the whole
| system?
| Mainframes have been around for years.  It's interesting to see open
| systems take on mainframe characteristics after all this time
All these obsolete ideas.  Stack tracebacks.  Feh!

Years back at Smarts, a company since acquired by the EMC you see in
my email address, one of the things I added to the system was a set of
signal handlers which would print a stack trace.  The way to do this was
very non-uniform:  On Solaris, you had to spawn a standalone program
(but you got a stack trace of all threads).  On HPUX, there was a
function you could call in a system library.  On AIX (you'd think IBM,
of all vendors, would do better!) and Windows, we had to write this
ourselves, with varying degrees of OS support.  We also dump - shock!
horror! - the values in all the registers.  And we (try to) produce a
core dump.

My experience has been that of crashes in the field, 90% can be fully
analyzed based on what we've written to the log file.  Of the rest, some
percentage - this is harder to estimate because the numbers are lower -
can be fully analyzed using the core dump.  The rest basically can't be
analyzed without luck and repetition.  (I used to say 80-90% could be
analyzed from the core file, but the number is way down now because (a)
we've gotten better at getting information into and out of the log files
- e.g., we now keep a circular buffer of messages, including those at
too low a severity level to be written to the log, and dump that as
part of the failure output; (b) the remaining problems are exactly the
ones that the current techniques fail to handle - we've fixed the
easier ones.)
-- Jerry


Re: [SC-L] Coding with errors in mind - a solution?

2006-09-05 Thread Leichter, Jerry
[Picking out one minor point:]
| [Exceptions] can simplify the code because
| -as previously mentioned by Tim, they separate error handling from normal
| logic, so the code is easier to read (it is simpler from a human reader's
| perspective).  I have found bugs in my own code by going from error handling
| to exceptions -- it made the mistake plain to see.
I agree with this ... but:

- Many years ago, I worked in a language called BASIC PLUS.
(This was an extension to BASIC that DEC bought along
with the RSTS operating system.  It was actually a
surprisingly productive environment.  But the details
aren't really relevant here.)

  BASIC PLUS had a simple exception handling approach:  You
specified ON ERROR followed by a statement, which requested that
if any system-detected error occurred (and, in modern
terms, BASIC PLUS was a safe language, so *all* errors
were detected by the run-time system) then the given
statement was to be executed.  Almost universally, the
statement was a GOTO to a particular line number.
At that line, you had access to the particular error
that occurred (as an error number); the line number on
which it occurred; the current state of any variables;
and the ability to resume execution (with some funny
limitations).  This provided exactly the separation
of normal code flow from error handling that seems
like a fine idea ... until you try to do anything with
it other than logging the error and aborting at some
high level of abstraction.

  I wrote an incredibly hacky error handler and a function called
FNSET() that was used essentially as follows:

	IF (FNSET(errs))
	THEN	normal code
	ELSE	error path

  errs encoded the error numbers that would be handled by
error path; any errors not on the list took the
usual log-and-abort route.

  So ... this was essentially a try/catch block (which I don't
think I'd seen - this was in 1976 or thereabouts),
with the odd fillip that you declared the error
conditions you handled in the try rather than in
the catch.  It worked very well, and supplanted
the old monolithic error handlers that preceded it.

  But notice that it moved the error handlers right up close
to the normal operational code.  Yes, it's not as
close as a test after every call - *some* degree of
separation is good.  Really, what I think matters
is that the error handling code live at the right
semantic level relative to the code that it's
covering.  It's fine for the try/catch to be three
levels of call up from where the throw occurs *if
the semantics it reflects are those of the code
explicitly in its try block, not the semantics
three levels down*.  This is also what goes wrong
with a try block containing 5 calls, whose catch
block is then stuck with figuring out how far into
the block we got in order to understand how to unwind
properly.  *Those 5 calls together* form the semantic
unit being protected, and the catch should be written
at that level.  If it can't be, the organization of
the code needs to be re-thought.  (Notice that in
this case you can end up with a try/catch per function
call.  That's a bad result:  Returning a status
value and testing it would probably be more readable
than all those individual try/catches!)

- On a much broader level:  Consider the traditional place
where exceptions and errors occur - on an assembly
line, where the process has bugs, which are detected
by QA inspections (software analogue:  Assertions) which
then lead to rework (exception handling).  In the
manufacturing world, the lesson of the past 50 years
or so is that *this approach is fundamentally flawed*.
You shouldn't allow failures and then catch them later;
you should work to make failures impossible.

  Too much of our software effort is directed at better
expression (try/catch) and implementation (safe
languages, assertions, contract checking) of the
assume-it-will-fail, send-it-back-for-rework
approach.  Much, much better is to aim for making
failure impossible in the first place.

Re: [SC-L] Retrying exceptions - was 'Coding with errors in mind'

2006-09-01 Thread Leichter, Jerry
On Fri, 1 Sep 2006, Jonathan Leffler wrote:
| Pascal Meunier [EMAIL PROTECTED] wrote:
| Tim Hollebeek [EMAIL PROTECTED] wrote:
|  (2) in many languages, you can't retry or resume the faulting code.
|  Exceptions are really far less useful in this case.
| See above.  (Yes, Ruby supports retrying).
| Bjarne Stroustrup discusses retrying exceptions in Design and Evolution of 
| C++.  In particular, he 
| described one system where the language supported exceptions, and after 
| some number of years, a code review found that there was only one 
| retryable exception left - and IIRC the code review decided they were 
| better off without it.  
This is an over-simplification of the story.  I may get some of the
details wrong, but  This dates back to work done at Xerox PARC in
the late 70's-early 80's - about the same time that Smalltalk was being
developed.  There was a separate group that developed system programming
languages and workstation OS's built on them.  For a number of years,
they worked on and with a language named Mesa.  Mesa had extensive
support for exceptions, including a resumption semantics.  If you think
of exceptions as a kind of software analogue to a hardware interrupt,
then you can see why you'd expect to need resumption.  And, indeed, the
OS that they built initially used resumption in a number of places.

What was observed by the guys working on the system was that the number
of uses of resumption was declining over time.  In fact, I vaguely
remember that when they decided to go and look, they were using
resumption in exactly two places in their entire code base - OS,
compilers, windowing system, various tools.  (Exceptions were used
extensively - but resumption hardly at all.)

It's not that anyone deliberately set out to remove uses of resumption.
Rather, as the code was maintained, it repeatedly turned out that
resumption was simply the wrong semantics.  It just did not lead to
reliable code; there were better, more understandable, more robust
ways to do things that, over time and without plan, came to dominate.

Somewhere, I still have - if I could ever find it in my garage - the
paper that described this.  (It was a retrospective on the effort,
almost certainly published as one of the PARC tech reports.)

Mesa was replaced by Cedar, which was a simpler language.  (Another
lesson was that many of the very complex coercion and automatic
conversion mechanisms that Mesa had weren't needed.)  I think Cedar
discarded Mesa's exception resumption semantics.  Many of the people
involved in these projects later moved to DEC SRC, where they used
the ideas from Mesa and Cedar in Modula2+ and later Modula3 - a
dramatically simpler language than its older Xerox forebears.  It
also used exceptions heavily - but never had resumption semantics.
Much of the basic design of Java and C# goes back to Modula3.

| How much are retryable exceptions really used, in 
| Ruby or anywhere else that supports them?
Not only how much are they used, but how well do they work?  Remember,
if you'd look at *early* instances of Mesa code, you might well have
concluded that they were an important idea.  It may well be that for
quick one-off solutions, resumption is handy - e.g., for building
advise-like mechanisms to add special cases to existing code.  But as
we should all know by now, what works in-the-small and in-the-quick is
not necessarily what works when you need security/reliability/
-- Jerry