Tim Churches wrote:

On Tue, 2004-04-20 at 11:10, Thomas Beale wrote:



Let's use the term "strongly-but-dynamically typed" instead of the pejorative "weakly typed" for Python, Ruby, Smalltalk etc, even though "weakly typed" is more common.

Firstly, if you send a message of the wrong type to an object in a
strongly-but-dynamically typed language like Python, it **does** raise
an exception - but, as you said, at runtime, not compile time. This is, as
Bruce Eckel says, a "Faustian bargain" - most programmers, when using
such languages, find they are trading off some compile-time checking for
much greater programming productivity.
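A minimal Python sketch of that bargain (the function name and figures are invented for illustration): nothing is checked when the function is defined, and a wrong-typed argument only raises an exception when the offending line actually executes.

```python
def add_doses(morning_mg, evening_mg):
    # No declared parameter types: any objects supporting "+" are accepted.
    return morning_mg + evening_mg

print(add_doses(50, 25))        # 75 - works fine with two ints

try:
    add_doses(50, "25")         # int + str is rejected, but only at runtime
except TypeError:
    print("caught at runtime, not at compile time")
```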

The question is, is this bargain with the Devil a good one? Bruce Eckel
argues, far more elegantly and authoritatively than I can, that it is a
good bargain, because type errors are only a small class of possible
(and common) errors, and you need to test for all the other types of
errors anyway, whether you are using a statically strongly typed
language like Java or Eiffel or not. See
http://www.mindview.net/WebLog/log-0025


Interesting notes. Quoting:

This began with a 2-month love affair with Perl, which gave me productivity through rapid turnaround. (The affair was terminated because of Perl's reprehensible treatment of references and classes; only later did I see the real problems with the syntax.) Issues of strong-vs-weak typing were not visible with Perl, since you can't build projects large enough to see these issues and the syntax obscures everything in smaller programs.

I don't know who Bruce Eckel is, but I do have to say (and I think a lot of professional software engineers and formal methods people would back this up) that anyone who does not see, after 15 minutes of reading the manual, the inherent problems of using Perl to build large software that is maintainable and re-usable probably doesn't have that much experience with large systems. Not that I am condemning Perl or its programmers - not at all - I use it for what it's good for: writing scripts that last until they break. Just an observation...

Let's take a hypothetical example. Let's say we are building a
medication ordering verification system. Vincristine and methotrexate
are two chemotherapy agents which are commonly used to treat leukaemia.
Vincristine is given intravenously but methotrexate is often given
intrathecally (into the cerebrospinal fluid) to kill leukaemic cells
which have migrated there. The two drugs are often administered to the
patient in the same chemotherapy session. Regrettably, a far from rare
mistake is to administer the vincristine intrathecally instead of, or
together with, the methotrexate - usually with fatal or devastating
results. Often the mistake is in the written (or computerised)
medication order - the doctor ordering it specifies "intrathecally" for
the methotrexate but forgets to specify "intravenously only" for the
vincristine. There are documented instances in which the order has been
completely wrong (reversed): "vincristine intrathecally, methotrexate
intravenously". Either way, an inexperienced nurse or junior doctor,
working under pressure, is then asked to administer the drugs, follows
the order to the letter and makes the terrible mistake.


Without even reading the next bit, I can already see where this is going - what we have here is a data error, and that's not in the same space as "model" errors. In the future, if not already, the correctness of the prescription would be guaranteed by a drug knowledge-base which knows about drugs and their correct/allowable routes. This kind of error isn't stopped by compile-time type checking at all, of course; compile-time checking operates at a lower level - it helps to guarantee that the software's model of the interface to the drug knowledge-base (for example) is correct, e.g. that it can correctly discover that vincristine should _never_ be given intrathecally, and represent that fact correctly on the screen or wherever. So it wouldn't be hard to imagine an error in this area which would be caught by compile-time checking, but it will almost always be to do with the "model" of something, not the actual data.

So how would compile-time static type checking help our medication
ordering system to detect such an ordering mistake? You could define
classes for the permissible routes of administration of each drug
object, but that's only one way to categorise drugs (you would also want
to consider drug interactions and also drug indications and
contra-indications etc etc etc), so you would need multiple, very
complex drug object class hierarchies and then make very extensive use
of multiple inheritance in order to catch such errors at compile time.
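A hypothetical sketch of the kind of class hierarchy this paragraph describes - all names are invented. The idea is to encode permissible routes in the type system so that a static checker (e.g. mypy run over the annotations) would reject an intrathecal vincristine order before the program ever runs; plain Python itself would not complain unless a runtime isinstance check were added.

```python
class IntravenousDrug:
    """Marker base class: drug may be given intravenously."""

class IntrathecalDrug:
    """Marker base class: drug may be given intrathecally."""

class Vincristine(IntravenousDrug):            # IV only - deliberately NOT intrathecal
    pass

class Methotrexate(IntravenousDrug, IntrathecalDrug):   # multiple inheritance
    pass

def administer_intrathecally(drug: IntrathecalDrug) -> str:
    # The annotation is the "compile-time" constraint a static checker enforces.
    return f"given intrathecally: {type(drug).__name__}"

print(administer_intrathecally(Methotrexate()))   # accepted by a static checker
# administer_intrathecally(Vincristine())         # flagged by a static checker;
#                                                 # plain Python would run it silently
```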


you just wouldn't do that (although lots of green programmers do) - knowledge should reside in knowledge bases, not in compiled software.
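A minimal sketch of the point above, with invented names and a dictionary standing in for a real drug knowledge-base: the route constraint lives in data, so it can be updated when drugs or indications change, without recompiling anything.

```python
# Stand-in for a drug knowledge-base: maps each drug to its allowable routes.
# In a real system this would be loaded from an external, updatable source.
ALLOWED_ROUTES = {
    "vincristine":  {"intravenous"},
    "methotrexate": {"intravenous", "intrathecal"},
}

def check_order(drug: str, route: str) -> bool:
    """Return True if the knowledge-base permits this drug via this route."""
    return route in ALLOWED_ROUTES.get(drug, set())

print(check_order("methotrexate", "intrathecal"))  # True  - order allowed
print(check_order("vincristine", "intrathecal"))   # False - order rejected
```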

Given that new drugs are being added all the time, and indications etc
for existing drugs are constantly changing, I suspect that you would
need to re-compile such a system at least once a day to gain any benefit
from compile-time checking of static typing.


and that's one of the main reasons why;-)

Maybe that's a contrived example, but I can't help agreeing with Bruce
Eckel that real-world mistakes are often more complex or subtle than any
static typing system will detect, and thus you need to write tests,
tests and more tests anyway - sometimes build-time tests, but often
run-time tests (checks), so why not use a language which lets you write
such tests far more quickly?


many real-world errors are indeed far too subtle for static type checking; that's not the problem. What I have never heard is that static typing prevents you writing tests quickly. Nor have I ever read that dynamically- or non-typed languages really have a substitute for compile-time type checking - it's just maths really: taking away types at compile time means the number of possible object interactions at runtime grows exponentially, and making inferences about which ones would be incorrect becomes uncomputable. There are two reasons why statically typed languages exist and are widely used, and this is one; the other is that there is (or should be) a near-direct translation between the model (e.g. some fairly abstract design diagram) and the code (i.e. the types in the model exist in the code as well). They're very important in many domains (if not all), and it's why such languages will probably continue to be widely used.

I must admit I have never seen any attraction in dynamically typed languages (other than for scripting), although I have seen some very nice things in their IDEs (especially Visual Age). I'd love to be proved wrong!

- thomas






