On 8/6/2014 15:27, Andy Bradford wrote:
> Thus said Warren Young on Wed, 06 Aug 2014 12:22:45 -0600:
>> Out here in the normal software world, I think we are being
>> presumptuous to use the word "engineering." We usually cannot write an
>> equation to prove our software correct.
> The earth trembles as Dijkstra rolls in his grave:
Show me a mathematically-rigorous method for proving the correctness of
any program involving indeterminacy, and we'll talk.
That's pretty much every program of consequence. Any program that opens
a TCP socket, shares a SQLite DB with another program, presents a GUI,
or uses threads is probably undecidable in an a priori fashion, because
you cannot usefully describe all of the input states.
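To make the threads case concrete, here's a deliberately contrived Python sketch (my own toy, not anything from Fossil): eight threads each do a read-pause-write increment on a shared counter, and the final value depends entirely on how the scheduler interleaves them. There is no single input state you could enumerate and verify against.

```python
import threading
import time

counter = 0
lock = threading.Lock()

def racy_increment():
    # Classic lost-update race: read, pause, write back.
    global counter
    tmp = counter
    time.sleep(0.001)       # widen the window so the race is visible
    counter = tmp + 1

def safe_increment():
    # Same read-modify-write, but serialized by a lock.
    global counter
    with lock:
        tmp = counter
        time.sleep(0.001)
        counter = tmp + 1

def run(worker, n=8):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

racy = run(racy_increment)   # anywhere from 1 to 8, scheduler's choice
safe = run(safe_increment)   # always 8
```

The locked version is deterministic; the unlocked one is "correct" or not depending on timing you don't control, which is exactly the kind of thing a static proof can't pin down.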
I like my academic publication link better:
I don't recall if I found that one on my own or through the SQLite FAQ:
There have been heroic efforts to wall off indeterminacy, the most
famous of which is probably Haskell and its I/O monad, but that only
gets you so far.
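You can approximate the discipline Haskell enforces by hand in any language: keep the provable part a pure function of its arguments, and push all the indeterminate I/O into a thin outer shell. A toy sketch of the idea (mine, with made-up function names):

```python
import sys

def summarize(lines):
    # Pure core: same input, same output, every time.
    # This part is testable -- even provable -- in isolation.
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

def main():
    # Impure shell: terminal I/O lives here, and only here.
    print(summarize(sys.stdin.readlines()))
```

The catch, as above, is that the shell is where all the interesting programs spend their time.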
I read an article in CACM a month or so ago, where the guy was basically
blasting all of the impure FP languages that are becoming popular
(F#/OCaml, Clojure, Scala, etc.) for drawing attention away from the
likes of Haskell. The author didn't name names, but you could feel his
vibrating agitation as he described all the problems that crop up when
you get away from mathematical purity.
True enough. We agree on the scope of the problem; it's huge.
The solution, however, is not to cut off all the impurity. That turns
computers into glorified calculators, REPLs that can't communicate with
the outside world except via the terminal interface.
Such systems *can* be proven mathematically correct. The thing is,
they're never going to push aside all of the impure things we actually
do with computers.
Another non-starter is to return computers to batch processing.
My Pascal prof tried to teach us that all software could be described as
input-process-output. If the input is a stack of cards in a known
format, we can formally test whether the output is appropriate for the
input. If we feed the same input back in again, we should get the same
output. We see this testing methodology today as unit testing, where we
try to turn each tiny piece of a large program into an i-p-o pipeline.
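Here's that card-deck model in miniature, as a Python toy of my own: a known input format, a deterministic process, and a check that re-feeding the same input gives the same output.

```python
def process(records):
    """Pure i-p-o step: parse 'name,score' records, return the top scorer."""
    parsed = [(name, int(score))
              for name, score in (r.split(",") for r in records)]
    return max(parsed, key=lambda p: p[1])[0]

deck = ["ada,90", "grace,95", "alan,85"]   # the "stack of cards"
assert process(deck) == "grace"            # known input -> expected output
assert process(deck) == process(deck)      # same input -> same output
```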
But, a pile of formally correct i-p-o pipelines doesn't magically
assemble into a formally correct aggregate program. And, not everything
is testable in that way in the first place. Witness all the marginally
useful GUI testing systems, for just one example. For another, network
services: you can't replay the timing and peer behavior of a live
connection as if it were a stack of cards.
I'll finish with my original premise: except in areas where software
development is just a way of doing physics or pure mathematics of one
sort or another, you probably are not doing engineering from the start.
This is why I prefer the term software development. We may not know
exactly where we are going or how to get there, but somehow we always
recognize it when we arrive.
fossil-users mailing list