IMO the path to changing the dynamics for secure coding lies in the
market, the courts, and the capacity of the software industry to
measure and test itself and to demonstrate the desired properties of
security, quality, and suitability for purpose.  In today's market we do
well on suitability for purpose (i.e., marketing, then testing, pilot,
and purchase), but I believe we do poorly on security and quality.

Rather than trying to "tax" software for being bad, it will likely prove
more successful to reward vendors for being good, in the form of
market support and sales.  That dynamic will probably work better, and
will empower the companies and individuals (who choose to get this
involved) to weigh security and quality against cost or convenience.
The punishment for bad software is lost sales and the eventual loss of
the use of that product within the market.

Software vendors will need a three-tier approach to software security:
developer training and certification, internal source testing, and
external independent audit and rating.  The open source version of this
can be the same, but applied more individually or at the "derivative
product" level (i.e., if I make a Linux-based appliance from open
source, I become the owner of the issues in that Linux derivative).

The legal side will need to move the EULA away from a "hold harmless"
model to one where vendors and software buyers can assert that the
software is expected to perform at certain security or quality levels,
with known backing from legal recourse.  Companies that make and sell
software and can be sued offer more recourse than open source, but
that's not better or worse, just different.  The buyer can choose the
degree of security and quality rating (based on the audit, etc.) and the
legal recourse they want via the SLA or the choice to use open source.
For a high assurance system one might choose a software vendor with a
contract and SLA, while choosing open source for lower assurance
efforts that come with less recourse but less cost too.  This opens a
market for companies who choose to "resell" open source but provide the
support and assurances.  It also allows anyone to choose which factors
are important, and buy or use accordingly.

If a choice results in "downstream" impacts, then the deployer of the
software is initially accountable, and they must determine whether they
have recourse via SLA or contract to their provider.  If they accepted
the risk of a non-supported software package, then their deployment and
the ensuing harm are their responsibility.  If they have recourse then,
as the saying goes, "they pass the savings on."

Home users are also empowered if they choose to be, but overall they
gain in two major ways.  The market will be driven to more secure
software over time without their direct knowledge (as companies and
governments choose to require software to be more secure), and they can
benefit from any legal recourse available for notable security
failures or quality gaps.

Using these factors, anyone could make decisions based on the need for
recourse (courts), assurance (market), and quality (industry ratings and
standards for security and quality) and come away with software that
meets their needs in each area, without excluding open or closed source
or leaving corporate and consumer customers unprotected.

DISCLAIMER: Views are my own, and not those of my employer, and were
generated over a cup of coffee in a fairly stream of consciousness kind
of way.  Grains of salt not supplied, but are recommended when consuming
the contents. :)

-----Original Message-----
[mailto:[EMAIL PROTECTED] On Behalf Of Leichter, Jerry
Sent: Friday, November 30, 2007 6:28 AM
To: der Mouse
Subject: Re: [SC-L] Insecure Software Costs US $180B per Year -
Application and Perimeter Security News Analysis - Dark Reading

| >     Just as a traditional manufacturer would pay less tax by
| >     becoming "greener," the software manufacturer would pay less tax
| >     for producing "cleaner" code, [...]
| > One could, I suppose, give rebates based on actual field experience:
| > Look at the number of security problems reported per year over a
| > two-year period and give rebates to sellers who have low rates.
| And all of this completely ignores the $0 software "market".  (I'm
| carefully not saying "free", since that has too many other meanings,
| some of which have been perverted in recent years to mean just about
| the opposite of what they should.)  Who gets hit with tax when a bug
| is found in, say, the Linux kernel?  Why?
I'll answer this along the lines of my understanding of the proposal at
hand.  I have my doubts about the whole idea, for a number of reasons,
but if we grant that it's appropriate for for-fee software, it's easy to
decide what happens with free software - though you won't like the
answer:  The user of the software pays anyway.  The cost is computed in
some other way than as a percentage of the price - it's not clear
exactly how.  Most likely, it would be the same tax as is paid by
competing non-free software with a similar security record.  (What you
do when there is no such software to compare against is an interesting
problem for some economist to work on.)

The argument the author is making is that security problems impose costs
on *everyone*, not just on the party running the software.  This is a
classic economic problem:  If a party can externalize its costs - i.e.,
dump them on other parties - its incentives become skewed.  Right now,
the costs of security problems for most vendors are externalized.
Where do those costs go?  We usually think of them as borne by that
vendor's customers.  To the degree that's so, the customers will have an
incentive to push costs back onto the vendor, and eventually market
mechanisms will clean things up.  To some degree, that's happened to
Microsoft:  However effective or ineffective their security efforts,
it's impossible to deny that they are pouring large sums of money
into the problem.

To the degree that the vendors' customers can further externalize those
costs onto the general public, however, they have no incentive to
push back either, and the market fails.  This is pretty much the case
for personal, as opposed to corporate, users of Microsoft's software.
Imposing a tax is the classic economic answer to such a market failure.
The tax's purpose is (theoretically) to transfer the externalized costs
back to those who are in a position to respond.  In theory, the cost
of security problems - real or merely possible; we have to go with
the latter because by the time we know about the former it's very
late in the game - should be borne by those who develop the buggy
code, and by those who choose to use it.  A tax on the code itself
takes directly from the users of the code, and indirectly from the
vendors, because they will find it harder to compete with
vendors who pay lower tax rates, having written better code.  It's
much harder to impose the costs directly on the vendors.  (One way
is to require them to carry insurance - something we do with, say,
trucking companies.)
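To make the mechanism concrete, here is a toy sketch of how such a tax
could bias the market toward cleaner code.  Every number and the tax
formula itself are invented for illustration - nothing like this appears
in the proposal - but it shows the intended dynamic: the tax is paid at
the point of sale, so the user bears it directly and the vendor bears it
indirectly through a less competitive after-tax price.

```python
# Toy model (all rates and numbers are hypothetical, chosen only to
# illustrate the incentive structure described above).

def security_tax(price, vulns_per_year, baseline_rate=0.05, per_vuln=0.01):
    """Per-sale tax: a baseline rate plus a surcharge per reported
    vulnerability.  A vendor with a cleaner security record pays a
    lower effective rate."""
    rate = baseline_rate + per_vuln * vulns_per_year
    return price * rate

list_price = 100.0

# Two vendors with the same list price but different security records:
careful_total = list_price + security_tax(list_price, vulns_per_year=2)
sloppy_total = list_price + security_tax(list_price, vulns_per_year=20)

print(f"careful vendor, after tax: {careful_total:.2f}")  # 107.00
print(f"sloppy vendor, after tax:  {sloppy_total:.2f}")   # 125.00
```

At identical list prices, the buggier product ends up visibly more
expensive, which is the "indirect" pressure on vendors the paragraph
describes; the free-software case discussed earlier would correspond to
a list price of zero with the tax pegged to a comparable non-free
product's rate.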

In any case, these arguments apply to free software in exactly the same
way they do to for-fee software.  If I give cars away for free, should I
be absolved of any of the costs involved if they pollute, or cause
accidents?  If I'm absolved, should the recipients of those cars also be
absolved?  If you decide the answer is "yes", what you've just decided
is that *everyone* should pay a hidden tax to cover those costs.  In
what way is *that* fair?
                                                        -- Jerry
Secure Coding mailing list (SC-L)
List information, subscriptions, etc -
List charter available at -
SC-L is hosted and moderated by KRvW Associates, LLC
as a free, non-commercial service to the software security community.