The programmer is neither the application architect nor the system
engineer.
In some cases he is. Either way, it doesn't matter. I'm not asking
the programmer to re-design the application; I'm asking them to
program the design 'correctly' rather than 'with bugs'.
Except that sometimes
At 4:21 PM -0400 4/11/05, Dave Paris wrote:
Joel Kamentz wrote:
Re: bridges and stuff.
I'm tempted to argue (though not with certainty) that the bridge
analogy is flawed in another way --
that of the environment. While many programming languages have similarities
and many
David Crocker wrote:
3. Cross-site scripting. This is a particular form of HTML injection and would
be caught by the proof process in a similar way to SQL injection, provided that
the specification included a notion of the generated HTML being well-formed. If
that was missing from the
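As a concrete illustration of what such a well-formedness requirement buys you, here is a minimal sketch in Python using the standard library's html.escape. The render_comment function is just an illustrative name, not anything from the thread; the point is only that user-controlled text is made inert before it is embedded in HTML.

```python
import html

def render_comment(user_text: str) -> str:
    # Escape user-controlled text before embedding it in markup, so a
    # payload like <script>...</script> is displayed rather than executed.
    # This is the property a "generated HTML is well-formed" clause in a
    # specification would force the proof process to check.
    return "<p>" + html.escape(user_text) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Without the escaping step the same input would be emitted as live markup, which is exactly the class of flaw the proof process could not flag if the well-formedness clause were missing from the spec.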
Nash wrote:
** It would be extremely interesting to know how many exploits could
be expected after a reasonable period of execution time. It seems that
as execution time went up we'd be less likely to have an exploit just
show up. My intuition could be completely wrong, though.
I would think
Pascal Meunier [EMAIL PROTECTED] writes:
Do you think it is possible to enumerate all the ways all vulnerabilities
can be created? Is the set of all possible exploitable programming mistakes
bounded?
I believe that one can make a Turing machine halting argument to show
that this is impossible.
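The halting-style argument can be sketched concretely. Everything in this construction is hypothetical: find_exploits stands in for an assumed total-and-correct exploit detector (no real tool has this signature), and run/untrusted are illustrative names inside the generated probe program. If such a detector existed, it would decide the halting problem:

```python
def find_exploits(source: str) -> bool:
    # Placeholder for the assumed-perfect analyzer. The argument shows no
    # total, correct implementation of this function can exist.
    raise NotImplementedError("no total, correct exploit detector exists")

def build_probe(program_src: str, input_data: str) -> str:
    # Build a program whose only flaw sits *after* a full simulation of the
    # target: the shell-injection line is reachable iff the target halts.
    return (
        f"run({program_src!r}, {input_data!r})  # never returns if target loops\n"
        "import os\n"
        "os.system('echo ' + untrusted())  # reachable only if target halted\n"
    )

def halts(program_src: str, input_data: str) -> bool:
    # A perfect find_exploits would make this a halting decider,
    # contradicting Turing's theorem. Hence "is this program exploitable"
    # is undecidable in general, and the set of exploitable mistakes
    # cannot be mechanically enumerated by any single tool.
    return find_exploits(build_probe(program_src, input_data))
```

The construction reduces halting to exploit detection: the probe is exploitable exactly when the embedded target halts, so any oracle for the latter answers the former.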
I would question you if you told me that your default is to _NOT_
include 'security' and to only _DO_ include it when someone asks.
Security is not a single thing that is included or omitted.
Again, in my experience that is not true. Programs that are labelled
'Secure' vs
Or until you find a bug in your automated prover. Or, worse,
discover that a vulnerability exists despite your proof, meaning
that you either missed a loophole in your spec or your prover has a
bug, and you don't have the slightest idea which.
On that basis, can I presume that you believe