Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-14 Thread Michael Silk
I don't think that analogy quite fits :) If the 'grunts' aren't doing
their job, then yes - let's blame them. Or at least help them find
ways to do it better.

-- Michael

[Ed. Let's consider this the end of the thread, please.  Unless someone
wants to say something that is directly relevant to software security,
I'm going to let it drop.  KRvW]

On 4/13/05, Dave Paris [EMAIL PROTECTED] wrote:
 So you blame the grunts in the trenches if you lose the war?  I mean,
 that thinking worked out so well with Vietnam and all...  ;-)

 regards,
 -dsp

  I couldn't agree more! This is my whole point. Security isn't 'one
  thing', but it seems the original article [that started this
  discussion] implied that so that the blame could be spread out.
 
  If you actually look at the actual problems you can easily blame the
  programmers :)




Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-14 Thread Dave Paris
Michael Silk wrote:
I don't think that analogy quite fits :) If the 'grunts' aren't doing
their job, then yes - let's blame them. Or at least help them find
ways to do it better.
If they're not doing their job, no need to blame them - they're
critically injured, captured, or dead. ...or in the case of programmers
- fired.  If you insist on blaming them, you're redirecting blame and
that's BS.
As for finding ways to do it better .. they're well trained - if
they're not well trained, they're (again) critically injured, captured,
or dead.  But as happened in the most recent event in the big sandbox,
they're not well supplied in all cases.  Wow.  Sound familiar?  What?  A
programmer not given full specifications or the tools they need?  Yeah.
 That never happens in the Corporate World.
The analogy works.
Some comparisons:
You call in for close air support .. and friendlies drop munitions on
your position (your manager just told the VP "yeah, we can ship two
weeks early, no problems").
You call in for intel on your position and you're told the path to your
next objective is clear - only to get ambushed as you're halfway there
(the marketing guys sold the customer a bill of goods that can't
possibly be delivered in the time allotted - and your manager agreed to
it without asking the programmers)
You're recon and you light up a target with a laser designator and then
call in the bombers - only to find they can't drop the laser-guided
munitions because friendlies just blew up the nearby fuel depot and
now they can't get a lock on the designator because of the smoke (sorry,
you can't get the tools you need to do your job so make do with what
you've got - never mind that the right tool is readily available - i.e.
GPS-guided munitions in this example - it's just not supplied for this
project).
.. ok, enough with the examples, I hope I've made my point.
Mr. Silk, it's become quite clear to me from your opinions that you
appear to live/work in a very different environment (frankly, it sounds
somewhat like Nirvana) than the bulk of the programmers I know.
Grunts and programmers take orders from their respective chain of
command.  Not doing so will get a grunt injured, captured, or killed and
a programmer fired.  Grunts and programmers each come with a skillset
and a brain trained and/or geared to accomplishing the task at hand.
Experience lets them accomplish their respective jobs more effectively
and efficiently by building on that training - but neither can disregard
the chain of command without repercussions (sanctions, court martial,
injury, or death in the case of a grunt - and demotion or firing in the
case of a programmer).  If the grunt or programmer simply isn't good at
their job, and the chain of command doesn't move them to a more
appropriate position, they're either dead or fired.
Respectfully,
-dsp


RE: [SC-L] Theoretical question about vulnerabilities

2005-04-14 Thread David Crocker
Crispin wrote:

 Here's an example of a case it cannot prove:

if X then
    Y <- initial value
endif
...
if X then
    Z <- Y + 1
endif

The above code is correct in that Y's value is taken only when it has
been initialized. But to prove the code correct, an analyzer would have
to be flow sensitive, which is hard to do.


The whole science of program proving is based on exactly this sort of flow
analysis. The analyser will postulate that Y is initialised at the assignment to
Z, given the deduced program state at that point. This is called a "verification
condition" or "proof obligation". It can then attempt to prove the hypothesis;
for this example, the proof is trivial. I guess it would have been more
difficult 20 years ago when Hermes was written.

The alternative solution to the problem of uninitialised variables is for the
language or static checker to enforce Java's definite initialisation rule or
something similar. This at least guarantees predictable behaviour.

An issue arises when a tool like Perfect Developer is generating Java code,
because even though PD can prove that Y is initialised before use in the above
example, the generated Java code would violate the definite initialisation
rule. We have to generate dummy initialisations in such cases.
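To make the dummy-initialisation point concrete, here is a hypothetical Java
sketch of the pattern above (the method name and values are mine, not from the
thread). Without the dummy assignment, javac's definite-assignment analysis
rejects the read of y - it does not correlate the two "if (x)" guards - even
though y is in fact always set before it is read:

```java
public class DefiniteAssignment {
    static int demo(boolean x) {
        int y = 0;  // dummy initialisation: without it, javac reports
                    // "variable y might not have been initialized" at the
                    // read below, because its analysis does not correlate
                    // the two identical 'if (x)' guards
        if (x) {
            y = 41;  // the real initialisation, guarded by x
        }
        int z = 0;
        if (x) {
            z = y + 1;  // y is only read when x is true
        }
        return z;
    }

    public static void main(String[] args) {
        System.out.println(demo(true));   // 42
        System.out.println(demo(false));  // 0
    }
}
```

A tool that has already proved the guard correlation can emit the dummy
assignment mechanically, which is the workaround described above.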

David Crocker, Escher Technologies Ltd.
Consultancy, contracting and tools for dependable software development
www.eschertech.com



RE: [SC-L] Theoretical question about vulnerabilities

2005-04-14 Thread David Crocker
Crispin Cowan wrote:


Precisely because statically proven array bounds checking is Turing Hard, that
is not how such languages work.

Rather, languages that guarantee array bounds insert dynamic checks on every
array reference, and then use static checking to remove all of the dynamic
checks that can be proven to be unnecessary. For instance, it is often the case
that a tight inner loop has hard-coded static bounds, and so a static checker
can prove that the dynamic checks can be removed from the inner loop, hoisting
them to the outer loop and saving a large proportion of the execution cost of
dynamic array checks.
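As an illustration of the loop shape Crispin describes (my own hypothetical
example, not from the thread): in Java every array access below carries an
implicit dynamic bounds check, but because the inner loop's bound and the row
stride are compile-time constants, a JIT can prove the accesses in-bounds once
per row and hoist the per-element checks out of the inner loop:

```java
public class BoundsChecks {
    // Sums a matrix stored row-major with a hard-coded width of 4.
    // The JVM bounds-checks every a[...] access dynamically, but the
    // constant inner bound lets a static/JIT analysis discharge the
    // inner-loop checks and verify them once per outer iteration.
    static int sumRows(int[] a, int rows) {
        int total = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < 4; c++) {   // hard-coded static bound
                total += a[r * 4 + c];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6, 7, 8};
        System.out.println(sumRows(a, 2));  // 36
    }
}
```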


Well, that approach is certainly better than not guarding against buffer
overflows at all. However, I maintain it is grossly inferior to the approach we
use, which is to prove that all array accesses are within bounds. What exactly
is your program going to do when it detects an array bound violation at
run-time? You can program it to take some complicated recovery action; but how
are you going to test that? You can abort the program (and restart it if it is a
service); but then all you have done is to turn a potential security
vulnerability into a denial of service vulnerability.

So the better approach is to design the program so that there can be no buffer
overflows; and then verify through proof (backed up by testing) that you have
achieved that goal.
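David's point can be illustrated with a hypothetical Java fragment (the names
are mine): the run-time check keeps the program memory-safe, but if the handler
simply abandons the operation, an attacker who controls the index can still
make requests fail - the security bug has become an availability bug:

```java
public class RuntimeCheck {
    // A dynamic bounds check converts a would-be memory corruption into a
    // thrown exception. The process survives, but the work is lost: with
    // attacker-supplied indices this is the denial-of-service argument
    // made above.
    static String lookup(String[] table, int untrustedIndex) {
        try {
            return table[untrustedIndex];
        } catch (ArrayIndexOutOfBoundsException e) {
            return "request aborted";  // safe, but the request fails
        }
    }

    public static void main(String[] args) {
        String[] table = {"alpha", "beta"};
        System.out.println(lookup(table, 1));   // beta
        System.out.println(lookup(table, 99));  // request aborted
    }
}
```

Proving statically that no out-of-bounds access can occur removes the need to
choose a recovery action at all, which is the alternative advocated above.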

David Crocker, Escher Technologies Ltd.
Consultancy, contracting and tools for dependable software development
www.eschertech.com