[SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-10 Thread dtalk-ml
Margus Freudenthal wrote:
>> Consider the bridge example brought up earlier. If your bridge builder
>> finished the job but said: "ohh, the bridge isn't secure though. If
>> someone tries to push it at a certain angle, it will fall".
>
> Ultimately it is a matter of economics. Sometimes releasing something earlier
> is worth more than the cost of later patches. And managers/customers are aware
> of it.
Unlike in the world of commercial software, I'm pretty sure you don't 
see a whole lot of construction contracts which absolve the architect of 
liability for design flaws.  I think that is at the root of our 
problems.  We know how to write secure software; there's simply precious 
little economic incentive to do so.

--
David Talkington
[EMAIL PROTECTED]



RE: [SC-L] Application Insecurity --- Who is at Fault?

2005-04-10 Thread Yousef Syed
Hi,
There are a number of different players here, each with a different point of
view:
- The final end user (e.g., a user of Windows XP).
- The corporation requesting a particular piece of software to be developed.
- The company where the software is actually developed: its managers and
programmers.

As for the final end user: they have little if any say in the arena of
software security and will generally just take whatever they are given and
expect it to work properly. Nor are they usually educated enough to make an
enlightened choice between two competing brands (e.g., their Dell computer
shipped with MS XP). Governments are the ones that should be protecting these
users. They don't understand click-through user licenses; they've never read
them, nor would they ever - the users aren't lawyers.
The more intelligent users can choose with their wallets, and the rest of us
can educate the poor ignorant masses...

Corporate: if you are a bank, you have to make sure that the agreement you
sign with whoever is tasked with developing the software incorporates User
Acceptance Testing that includes tests for security - preferably carried out
by a third-party company. Security should be assumed and expected as part of
any application's requirements, and as such should always be included in the
systems testing phase, unit testing or otherwise. This is one of the few
environments where security is an issue and is given some weight, though
mostly under certain circumstances. (Sarbanes-Oxley has suddenly lit a fire
under the shorts of a number of our clients - had there not been a
significant dollar value attached to achieving compliance, I doubt many of
the banks would have bothered spending so much of their money in this area.)
Funnily enough, the teeth behind SOX come not from lawmakers directly but
from SEC enforcement!

Software managers: these guys answer to the guy holding the purse strings.
That said, there are many ways to ensure that a system is relatively secure
even when the purse strings are tight. The first is realizing that security
isn't an add-on or a feature; it should be inherent in the product. Thus,
when architecting, architect securely. When designing, design securely.
Ensure that code is written securely (standard code reviews should spot these
problems just as they'll spot other problems). And testing phases should
incorporate security tests. Yet if a manager delivers a product on time and
on budget, but with a few security flaws (which aren't noticed until the app
has been out in the open for six months), he has retained his job and
probably got a promotion.


Coders: lowest on the food chain these days, and thus with the least
incentive to work that little bit harder to produce "proper" code. They are
highly unlikely to put any extra effort into producing better code unless a
manager requires it; nine times out of ten they simply want to shift their
workload. Conscientious and professional developers should still be producing
proper code. Bad code is a product of laziness, apathy and incompetence, and
there is plenty of incompetence out there - that is where training comes into
the picture. However, the coder knows that if they deliver late they will get
sacked; if they deliver on time, but a little buggy, they'll keep their jobs.

Personally, I think most of these problems will go away once security is no
longer viewed as an add-on feature of a product. Secondly, when financial
liability for security breaches is passed on, people will start taking note.
Money talks!

Another thing would be to name and shame the corporations that have their
systems hacked and lose countless credit card details. It would be nice if
the SEC would demand full and prompt disclosure in the event of a security
breach, with instant multi-million-dollar fines for slack corporations. When
corporations are made to feel the burden of their slack security, they'll
take it seriously... maybe...

Ys
--
Yousef Syed


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Michael Silk
Sent: 06 April 2005 23:45
To: Dave Paris
Cc: Secure Coding Mailing List
Subject: Re: [SC-L] Application Insecurity --- Who is at Fault?

Inline

On Apr 7, 2005 1:06 AM, Dave Paris <[EMAIL PROTECTED]> wrote:
> And I couldn't disagree more with your perspective, except for your
> inclusion of managers in parenthesis.
> 
> Developers take direction and instruction from management, they are not
> autonomous entities.  If management doesn't make security a priority,

See, you are considering 'security' as something extra again. This is
not right.

My point is that management shouldn't be saying 'Oh, and don't forget
to add _Security_ to that!' The developers should be doing it by
default.


> then only so much secure/defensive code can be written before the
> developer is admonished for being slow/late/etc.

Then defend yourself ... ! Just as you would if the project was too
large due to other reasons. Don't allow sec

Re: [SC-L] [ot] Application Insecurity --- Who is at Fault?

2005-04-10 Thread Pete Shanahan
Julie JCH Ryan, D.Sc. wrote:
> This is a little off topic, but I'm wondering if anyone would like to
> comment.
I'll bite, but as I'm not American, you'll have to take my comments with a 
grain
of salt.
Firstly, chastise your student for using the word "factoid": what he cited is
a fact; a factoid is untrue. I know this is pedantic, but the harassment I
got from my compiler lecturer about the differences between brackets, braces
and parentheses kind of stuck with me. [ eats, shoots, and leaves ]
The supposition that students have lost their edge because they do not enter
programming competitions is a poor argument - probably over 20 of the
universities listed among the 76 entries in this year's competition were
American, which seems a reasonable percentage, considering that significantly
fewer than that were of western European origin; I'd say you've got good
odds. I am aware that this is just throwing some arbitrary statistics at this
year's results. [ lies, damned lies and statistics ]
The challenge is not in the programming; it's in the problem solving, and the
fact that fewer American students are winning should be addressed by
questioning the motivations of those attending. I know for a fact that if I
were offered the opportunity to go to China to attend a programming
competition I would leap at the chance, knowing full well that I would
probably not place very well, but I'd have a damned fun time there. [ the
junket argument ]
I think that the duration of the challenge is fair, and as this is a pseudo
exam-like system, the unavailability of the internet is only fair too. Being
made to "memorize" things is not the issue: the students are expected to have
a good grounding in all the topics likely to turn up in the competition, and
by fielding a team you expect them to be capable of at least doing some
forward research into the likely topics so that they don't get caught short.
Real-world programming is for the most part boring - I can count on one hand
the times I've used really interesting algorithms in my work, and I've been
working in what would be, for a software engineer, a really interesting
field. Having a programming competition that emphasizes mathematical problems
makes it a fun challenge; without the "math type" problems, what would we
expect to see? I for one would lament the loss of a true challenge.
As for "have the US programmers lost their lead?", I'd have to say yes, they
have, but only because they're now a smaller piece of a much larger pie.
--
Pete +353 (87) 412 9576 [M] | +353 (1) 235 4027 [H]
Boston, n.:
Ludwig van Beethoven being jeered by 50,000 sports fans for
finishing second in the Irish jig competition.



[SC-L] Theoretical question about vulnerabilities

2005-04-10 Thread Pascal Meunier
Do you think it is possible to enumerate all the ways all vulnerabilities
can be created?  Is the set of all possible exploitable programming mistakes
bounded?

I would think that what makes it possible to talk about design patterns and
attack patterns is that they reflect intentional actions towards "desirable"
(for the perpetrator) goals, and the set of desirable goals is bounded at
any given time (assuming infinite time then perhaps it is not bounded).
However, once commonly repeated mistakes have been described and taken into
account, I have a feeling that attempting to enumerate all other possible
mistakes (leading to exploitable vulnerabilities), for example with the goal
of producing a complete taxonomy, classification and theory of
vulnerabilities, is not possible.  All we can hope is to come reasonably
close and produce something useful, but not theoretically strong and closed.

This should have consequences for source code vulnerability analysis
software.  It should make it impossible to write software that detects all
of the mistakes themselves.  Is it enough to look for violations of some
invariants (rules) without knowing how they happened?

Any thoughts on this?  Any references to relevant theories of failures and
errors, or to explorations of this or similar ideas, would be welcome.  Of
course, Albert Einstein's quote on the difference between genius and
stupidity comes to mind :).

Thanks,
Pascal Meunier


[SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-10 Thread ljknews
At 10:54 PM -0700 4/8/05, [EMAIL PROTECTED] wrote:
>Margus Freudenthal wrote:
>
>>> Consider the bridge example brought up earlier. If your bridge builder
>>> finished the job but said: "ohh, the bridge isn't secure though. If
>>> someone tries to push it at a certain angle, it will fall".
>>
>> Ultimately it is a matter of economics. Sometimes releasing something 
>> earlier is worth more than the cost of later patches. And managers/customers 
>> are aware of it.
>
>Unlike in the world of commercial software, I'm pretty sure you don't see a 
>whole lot of construction contracts which absolve the architect of liability 
>for design flaws.

But there are plenty that leave those involved an opportunity to litigate
their way out.  Consider Boston's Big Dig.
-- 
Larry Kilgallen




RE: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-10 Thread Edward Rohwer
In my humble opinion, the bridge example gets to the heart of the matter. In
the bridge example, the bridge would have been designed and engineered by
licensed professionals, while we in the software business sometimes call
ourselves "engineers" but fall far short of the real, professional, licensed
engineers other professions depend upon.  Until we as a profession are
willing to put up with that sort of rigorous examination and certification
process, we will always fall short in many areas and of many expectations.

Ed. Rohwer CISSP

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of [EMAIL PROTECTED]
Sent: Friday, April 08, 2005 10:54 PM
To: Margus Freudenthal
Cc: Secure Coding Mailing List
Subject: [SC-L] Re: Application Insecurity --- Who is at Fault?


Margus Freudenthal wrote:

>> Consider the bridge example brought up earlier. If your bridge builder
>> finished the job but said: "ohh, the bridge isn't secure though. If
>> someone tries to push it at a certain angle, it will fall".
>
> Ultimately it is a matter of economics. Sometimes releasing something
> earlier is worth more than the cost of later patches. And managers/customers
> are aware of it.

Unlike in the world of commercial software, I'm pretty sure you don't 
see a whole lot of construction contracts which absolve the architect of 
liability for design flaws.  I think that is at the root of our 
problems.  We know how to write secure software; there's simply precious 
little economic incentive to do so.

--
David Talkington
[EMAIL PROTECTED]





Re: [SC-L] Theoretical question about vulnerabilities

2005-04-10 Thread Crispin Cowan
Pascal Meunier wrote:
> Do you think it is possible to enumerate all the ways all vulnerabilities
> can be created?  Is the set of all possible exploitable programming mistakes
> bounded?

Yes and no.
Yes, if your enumeration is "1" and that is the set of all errors that 
allow an attacker to induce unexpected behavior from the program.

No is the serious answer: I do not believe it is possible to enumerate all of
the ways to make a programming error. Sure, it is possible to enumerate all
of the *commonly observed* errors that cause widespread problems. But
enumerating all possible errors is impossible, because you cannot enumerate
all programs.

> I would think that what makes it possible to talk about design patterns and
> attack patterns is that they reflect intentional actions towards "desirable"
> (for the perpetrator) goals, and the set of desirable goals is bounded at
> any given time (assuming infinite time then perhaps it is not bounded).

Nope, sorry, I disbelieve that the set of attacker goals is bounded.
> However, once commonly repeated mistakes have been described and taken into
> account, I have a feeling that attempting to enumerate all other possible
> mistakes (leading to exploitable vulnerabilities), for example with the goal
> of producing a complete taxonomy, classification and theory of
> vulnerabilities, is not possible.
I agree that it is not possible. Consider that some time in the next decade,
a new form of programming or technology will appear, and it will introduce a
new kind of pathology. We know that this will happen because it has already
happened: web forums that allow end-user content to be posted resulted in the
phenomenon of Cross-Site Scripting.
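As a concrete illustration of that pathology (a sketch only; this toy CGI
program is my own example, not taken from any real forum software):

    /* A C CGI program that echoes user input into HTML unescaped --
       the root cause behind Cross-Site Scripting. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* QUERY_STRING is attacker-controlled input from the URL. */
        const char *comment = getenv("QUERY_STRING");

        printf("Content-Type: text/html\r\n\r\n");
        /* Vulnerable: a comment like <script>...</script> is sent to
           every reader's browser and runs in the site's origin. */
        printf("<html><body><p>%s</p></body></html>\n",
               comment ? comment : "");
        return 0;
    }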

> All we can hope is to come reasonably close and produce something useful,
> but not theoretically strong and closed.

Security is very simple. Only use perfect software :) For those who can 
afford it, perfect software is great. The rest of us will be fighting 
with insecure software forever.

> This should have consequences for source code vulnerability analysis
> software.  It should make it impossible to write software that detects all
> of the mistakes themselves.
The impossibility of a perfect source code vulnerability detector is a
corollary of the undecidability of Alan Turing's Halting Problem.
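A minimal sketch of that corollary (the simulate_P() helper is hypothetical,
standing in for an arbitrary program P; nothing here refers to a real tool):

    #include <string.h>

    /* Hypothetical: runs an arbitrary program P, returning if and
       only if P halts. */
    extern void simulate_P(void);

    void constructed(const char *attacker_input)
    {
        char buf[8];

        simulate_P();
        /* This unsafe copy is reachable -- and thus exploitable --
           if and only if P halts. A detector that flagged it with
           perfect precision would therefore decide the Halting
           Problem. */
        strcpy(buf, attacker_input);
    }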

> Is it enough to look for violations of some invariants (rules) without
> knowing how they happened?

Looking for run-time invariant violations is a basic way of getting around
the static undecidability induced by Turing's theorem. Switching from static
to dynamic analysis comes with a host of strengths and weaknesses. For
instance, StackGuard (which is really just run-time enforcement of a single
invariant) can detect buffer overflow vulnerabilities that static analyzers
cannot. However, StackGuard cannot detect such a vulnerability until some
attacker helpfully comes along and tries to exploit it.
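A hand-rolled sketch of that single invariant (illustration only: the real
StackGuard lives in the compiler and guards the return address; the names and
layout below are made up for the example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY 0xDEADBEEFu

    /* Place a canary word just past a buffer so an overflow is
       likely to clobber it. */
    struct guarded {
        char buf[16];
        unsigned canary;
    };

    static void copy_input(const char *input)
    {
        struct guarded g;
        g.canary = CANARY;

        /* The kind of unchecked copy such tools exist to catch.
           (Overflowing buf is undefined behavior; the sketch leans
           on the struct layout only to make the check visible.) */
        strcpy(g.buf, input);

        /* Run-time invariant check: it fires only when an overflow
           actually happens at run time -- exactly the limitation
           noted above. */
        if (g.canary != CANARY) {
            fprintf(stderr, "stack smashing detected\n");
            abort();
        }
        printf("copied: %s\n", g.buf);
    }

    int main(int argc, char **argv)
    {
        copy_input(argc > 1 ? argv[1] : "short and safe");
        return 0;
    }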

> Any thoughts on this?  Any references to relevant theories of failures and
> errors, or to explorations of this or similar ideas, would be welcome.  Of
> course, Albert Einstein's quote on the difference between genius and
> stupidity comes to mind :).

"Reliable software does what it is supposed to do. Secure software does 
what it is supposed to do and nothing else." -- Ivan Arce

"Security is very simple. Only use perfect software :) For those who can 
afford it, perfect software is great. The rest of us will be fighting 
with insecure software forever." -- me :)

Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com