There are a number of different players here, each with a different point of
view:
- The final end user (e.g. a user of Windows XP).
- The corporation requesting a particular piece of software to be developed.
- The company where the software is actually being developed: its managers
  and coders.

As for the final end user: they have little if any say in the arena of
software security and will generally just take whatever they are given and
expect it to work properly. Nor are they usually educated enough to make an
enlightened choice between two competing brands (e.g. their Dell computer
shipped with MS Windows XP). The governments are the ones that should be
protecting these users. They don't understand click-through user licenses;
they've never read them and never would - the users aren't lawyers.
The more intelligent users can vote with their wallets, and the rest of us
can educate the ignorant masses...

Corporate: if you are in a bank, then you have to make sure that the
agreement you sign with whoever is tasked with developing the software
incorporates User Acceptance Testing that includes tests for security -
preferably carried out by a third-party company. Security should be assumed
and expected as part of any application's requirements, and as such should
always be included in the systems-testing phase, unit testing or otherwise.
This is one of the few environments where security is an issue and is given
some weight, though mostly only under certain circumstances. (Sarbanes-Oxley
has suddenly lit a fire under the shorts of a number of our clients - had
there not been a significant dollar value attached to achieving compliance,
I doubt many of the banks would have bothered spending so much of their
money in this area.) Funnily enough, SOX enforcement comes not from the
government directly, but from the SEC!

Software managers: these guys answer to whoever holds the purse strings.
That said, there are many ways to ensure that a system is relatively secure
even if the purse strings are tight. The first is to realize that security
isn't an add-on or a feature; it should be inherent in the product. Thus,
when architecting, architect securely. When designing, design securely.
Ensure that code is written securely (standard code reviews should spot
these problems just as they'll spot other problems). And testing phases
should incorporate security tests. The incentives are skewed, though: if a
manager delivers a product on time and on budget, but with a few security
flaws (which aren't noticed until the app has been out in the open for six
months), then he has retained his job and probably earned a promotion.
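To make "testing phases should incorporate security tests" concrete, here is a minimal sketch of a security check folded into an ordinary unit-test suite. The function under test, `build_greeting`, is an illustrative stand-in, not something from this thread:

```python
import unittest


def build_greeting(username: str) -> str:
    """Render a greeting, escaping HTML-sensitive characters in user input."""
    escaped = (username.replace("&", "&amp;")
                       .replace("<", "&lt;")
                       .replace(">", "&gt;"))
    return f"<p>Hello, {escaped}</p>"


class SecurityTests(unittest.TestCase):
    """Security assertions living alongside the usual functional tests."""

    def test_script_tags_are_neutralised(self):
        out = build_greeting("<script>alert(1)</script>")
        # Raw markup must not survive into the output...
        self.assertNotIn("<script>", out)
        # ...but the user's text should still be present, escaped.
        self.assertIn("&lt;script&gt;", out)
```

The point is simply that a hostile input is just another test case; it runs in the same phase, under the same tooling, as every other test.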

Coders: lowest on the food chain these days, and thus with the least
incentive to work that little bit harder to produce "proper" code. They are
highly unlikely to put any extra effort into producing better code unless a
manager requires it; nine times out of ten they simply want to shift their
workload.
Conscientious and professional developers should still be producing proper
code. Bad code is a product of laziness, apathy and incompetence - and there
is plenty of incompetence out there, which is where training comes into the
picture. However, the coder knows that if they deliver late, they will get
sacked; if they deliver on time, but a little buggy, they'll retain their
job.

Personally, I think that most of these problems will go away once security
is no longer viewed as an add-on feature to a product. Secondly, when
financial liability for security breaches is passed on, people will start
taking note. Money talks!

Another thing would be to name and shame those corporations that have their
systems hacked and lose countless credit card details. It would be nice if
the SEC would demand full and prompt disclosure in the event of a security
breach, and instant multi-million-dollar fines for the slack corporations.
When corporations are made to feel the burden of their slack security,
they'll take it seriously... maybe...

Yousef Syed

-----Original Message-----
Behalf Of Michael Silk
Sent: 06 April 2005 23:45
To: Dave Paris
Cc: Secure Coding Mailing List
Subject: Re: [SC-L] Application Insecurity --- Who is at Fault?


On Apr 7, 2005 1:06 AM, Dave Paris <[EMAIL PROTECTED]> wrote:
> And I couldn't disagree more with your perspective, except for your
> inclusion of managers in parenthesis.
> Developers take direction and instruction from management, they are not
> autonomous entities.  If management doesn't make security a priority,

See, you are considering 'security' as something extra again. This is
not right.

My point is that management shouldn't be saying 'Oh, and don't forget
to add _Security_ to that!' The developers should be doing it by default.

> then only so much secure/defensive code can be written before the
> developer is admonished for being slow/late/etc.

Then defend yourself...! Just as you would if the project was too
large for other reasons. Don't allow security to be 'cut off'.
Don't walk in and say 'Oh, I was just adding "security" to it.' A
manager will immediately reply: "Oh, we don't care about that...".
Instead say: "Still finishing it off...". (This _has_ worked for me in
the past, by the way.)

> While sloppy habits are one thing, it's entirely another to have
> management breathing down your neck, threatening to ship your job
> overseas, unless you get code out the door yesterday.

Agreed. (Can't blame consumers for this issue, however..)

> I'm talking
> about validation of user input, 

This is something that all programmers should be doing in _ANY_ type of
program. You need to handle input correctly for your app to function
correctly; otherwise it will crash the moment a dopey user types something
unexpected.
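As a hedged sketch of that kind of routine defensive input handling (the function name, field, and limits here are illustrative, not from this thread):

```python
def parse_transfer_amount(raw: str) -> int:
    """Validate an untrusted amount string before use: reject, don't guess."""
    raw = raw.strip()
    if not raw.isdigit():
        # Rejects empty strings, signs, decimals, and letters alike.
        raise ValueError("amount must be a non-negative whole number")
    amount = int(raw)
    if amount == 0 or amount > 1_000_000:
        raise ValueError("amount out of accepted range")
    return amount
```

The same check that stops an attacker probing for crashes also stops the dopey user's typo; correctness and security are the one habit.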

> ensuring a secure architecture to begin
> with, and the like.  

'Sensible' architecture too, though. I mean, that's the whole point of
a design - it makes sense. For example, an app may let a user update
accounts based on IDs, but never check whether the user actually owns
the account they are updating. The developers assume it's true because
they only _showed_ the user IDs they own.

You'd hope that your 'sensible' programmer would note that and confirm
that the user did, indeed, update the right account. Not only for security
purposes, but for consistency of the _system_. The app just isn't
doing what it was 'specified' to do if the user can update any
account. It's _wrong_ - from a specification point of view - not just
from a security one.
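The flaw described above - trusting that the client only submits account IDs it was shown - can be sketched, with the fix, like this. A minimal Python illustration; `Account`, `ACCOUNTS`, and `update_balance` are hypothetical names, not from any real system discussed here:

```python
class Account:
    def __init__(self, account_id: int, owner: str, balance: int):
        self.account_id = account_id
        self.owner = owner
        self.balance = balance


# Stand-in for a database of accounts, keyed by ID.
ACCOUNTS = {
    1: Account(1, "alice", 100),
    2: Account(2, "bob", 50),
}


def update_balance(current_user: str, account_id: int, new_balance: int) -> Account:
    account = ACCOUNTS.get(account_id)
    # The server-side ownership check: never trust that the client only
    # submits the IDs it was shown in the UI.
    if account is None or account.owner != current_user:
        raise PermissionError("account does not belong to current user")
    account.balance = new_balance
    return account
```

Without the ownership check, "alice" could update account 2 simply by sending a different ID - which, exactly as argued above, is a specification bug before it is a security bug.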

You would, I guess, classify this as something the managers/consumers
need to explicitly ask for. To me, it seems none of their business. As
a manager, you don't want to be micromanaging all these concepts (though
we are - CIOs...); they should be the sole responsibility of the
programmer to get right.

> The latter takes far more time to implement than is
> given in many environments.  The former requires sufficient
> specifications be given upfront 


-- Michael

> Michael Silk wrote:
> > Quoting from the article:
> > ''You can't really blame the developers,''
> >
> > I couldn't disagree more with that ...
> >
> > It's completely the developers' fault (and managers'). 'Security' isn't
> > something that should be thought of as an 'extra' or an 'added bonus'
> > in an application. Typically it's just about programming _correctly_!
> >
> > The article says it's a 'communal' problem (i.e: consumers should
> > _ask_ for secure software!). This isn't exactly true, and not really
> > fair. Insecure software or secure software can exist without
> > consumers. They don't matter. It's all about the programmers. The
> > problem is they are allowed to get away with their crappy programming
> > habits - and that is the fault of management, not consumers, for
> > allowing 'security' to be thought of as something separate from
> > 'programming'.
> >
> > Consumers can't be punished and blamed, they are just trying to get
> > something done - word processing, emailing, whatever. They don't need
> > to - nor should they, really - care about lower-level security in the
> > applications they buy. The programmers should just get it right, and
> > managers need to get a clue about what is acceptable 'programming' and
> > what isn't.
> >
> > Just my opinion, anyway.
> >
> > -- Michael
> [...]
