There are two kinds of "specialisation", "subtyping", "inheritance" 
(whatever you like to call it). One relates to re-use of an existing 
definition during development; the other relates to substitutable 
behaviour at run-time.

It's nice during development to be able to construct types using the 
hard work you've already done ("I'd like it like a video store 
rental, but instead of DVDs, I'll be renting crutches"). And this could 
appear to be the motivation behind the cut-n-paste philosophy of the 
archetype editor. However, arbitrary cut-n-paste during type development 
may or may not lead to substitutability between the types at run-time. 
This is fine if you appreciate this distinction, but not fine if you 
don't.

Although it's formalised in different ways by different people, the 
informal definition of run-time substitutability is the "no surprises" 
rule: if the user is expecting to interact with a P, and someone 
substitutes an instance of type Q at run-time, then Q must behave like 
P so that the user is not surprised.
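
In code terms, a minimal Python sketch of the rule (P, Q and use() are 
invented names, purely for illustration):

class P:
    def greet(self) -> str:
        return "hello"

class Q(P):
    def greet(self) -> str:
        # Q refines P but still honours P's promise of a greeting,
        # so a caller expecting a P is not surprised.
        return "hello there"

def use(p: P) -> None:
    # Written against P; works identically if handed a Q.
    print(p.greet())

use(P())   # prints: hello
use(Q())   # prints: hello there -- still a greeting, no surprise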

In some specific languages or systems, people can put rules on how types 
are developed at design-time that guarantee the run-time no-surprises 
rule (e.g. the UML rules about generalisation). However, there may be 
ways to develop types outside of those design-time rules that still 
achieve no surprises at run-time.

Therefore, it is not a good idea to argue about the validity of a 
"specialisation" based on what extension or restriction you made at 
design-time. When the rubber hits the road, it is only run-time 
substitutability that counts.

Generally, the richer the expressive power of the modelling language, the 
harder it is to come up with ways to specialise definitions that yield 
run-time substitutability. For example, if your modelling language is 
based purely on static typing, then you have a less rich specification; 
the user's expectations of run-time behaviour are correspondingly lower, 
and anything that offers the right types of parameters will be 
substitutable. A richer specification language with pre- and 
post-conditions on operations increases the user's understanding of 
type P and hence makes it harder for other types to substitute for it. 
If you include arbitrary constraints in the modelling language, it 
becomes very hard for any other type to substitute.
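
As a hedged illustration, assertions can stand in for pre- and 
post-conditions (Account and FeeChargingAccount are invented examples, 
not anything from a real contract language):

class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        assert 0 < amount <= self.balance      # pre-condition
        old = self.balance
        self.balance -= amount
        assert self.balance == old - amount    # post-condition

class FeeChargingAccount(Account):
    def withdraw(self, amount: float) -> None:
        # Type-checks as an Account, but deducts an extra fee, so it
        # violates the post-condition above: not substitutable.
        self.balance -= amount + 1.0

To a purely static type system, FeeChargingAccount is a perfectly good 
Account; it's only the richer contract that exposes the surprise.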

Just as a simple example, consider a definition of a Rectangle:

Rectangle {
    length : real;
    width : real;
    makeLarger()   // doubles the length and the width
    makeSmaller()  // halves the length and the width
}

If I introduce a specialisation Square with the restriction that length 
= width, does this satisfy run-time substitutability? Yes, it does: 
makeLarger() and makeSmaller() both scale length and width equally, so 
they preserve the length = width restriction.
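
Rendered as a Python sketch (the method names are just snake_case 
versions of the operations above):

class Rectangle:
    def __init__(self, length: float, width: float):
        self.length = length
        self.width = width

    def make_larger(self) -> None:   # doubles the length and the width
        self.length *= 2
        self.width *= 2

    def make_smaller(self) -> None:  # halves the length and the width
        self.length /= 2
        self.width /= 2

class Square(Rectangle):
    def __init__(self, side: float):
        super().__init__(side, side)
    # No overrides needed: make_larger() and make_smaller() scale both
    # dimensions equally, so length == width is preserved -- a caller
    # expecting a Rectangle sees nothing surprising.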

But if my definition of Rectangle included another operation

    stretch()      // doubles the length, width is unchanged

then Square would not satisfy run-time substitutability because it can't 
stretch.
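
A self-contained Python sketch of the failure (again just illustrative):

class Rectangle:
    def __init__(self, length: float, width: float):
        self.length = length
        self.width = width

    def stretch(self) -> None:       # doubles the length, width unchanged
        self.length *= 2

class Square(Rectangle):
    def __init__(self, side: float):
        super().__init__(side, side)

s = Square(3.0)
s.stretch()                          # inherited as-is: length=6.0, width=3.0
print(s.length == s.width)           # False -- the length == width invariant
                                     # is broken, so Square is not substitutable

(Overriding stretch() in Square to scale both sides wouldn't help 
either: a caller expecting "width is unchanged" would then be 
surprised.)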

Generally, unless a supertype is defined initially with a good 
understanding of the ways in which it might later be subtyped, the 
designer of the supertype will usually over-specify it (particularly if 
a rich modelling language is used), making it very difficult to extend 
in a run-time no-surprises way. So what tends to happen in practice is 
that people find a type definition Q that is more-or-less similar but 
not directly extensible as-is because of some aspect of Q (let's call 
that aspect X). The developer then extracts the non-X parts of Q into a 
new supertype P and makes the original type Q a specialisation of P 
that adds back the X-ness. Then the developer introduces their new type 
R as another subtype of P, but with its alternate flavour of X-ness 
(which might be no X-ness). So, in practice, re-use often occurs 
through generalisation rather than through specialisation (which isn't 
exactly the original O-O vision).

In the case of Rectangle and Square above, one would extract all the 
methods of Rectangle that could maintain the length = width invariant 
into the supertype and move the methods that could not maintain that 
invariant into the Rectangle subtype. Of course, later someone will 
realise that the new supertype still insists on right angles in the 4 
corners and will generalise it again to eliminate that aspect, and then 
someone else will decide that being 4-sided is an over-constraining 
limitation, etc. Such is the lifecycle of a type definition :-)
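
In Python terms, the refactored hierarchy might look like this (the 
supertype name ScalableShape is my invention; the refactoring above 
doesn't name it):

class ScalableShape:
    def __init__(self, length: float, width: float):
        self.length = length
        self.width = width

    def make_larger(self) -> None:   # scales both sides: preserves length == width
        self.length *= 2
        self.width *= 2

    def make_smaller(self) -> None:  # likewise invariant-preserving
        self.length /= 2
        self.width /= 2

class Rectangle(ScalableShape):
    def stretch(self) -> None:       # the invariant-breaking method moves down here
        self.length *= 2

class Square(ScalableShape):
    def __init__(self, side: float):
        super().__init__(side, side) # length == width by construction; Square
                                     # no longer inherits stretch(), so no surprises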

So, what this means in practice for EHRs is that developing a new 
archetype will often involve refactoring existing archetypes (as I've 
described above). We then have to ask ourselves whether the new 
definition of archetype Q (now a specialisation of a more generalised 
class) is in fact substitutable for the old Q. Provided you got the 
refactoring right, the old Q is probably identical in its behaviour to 
the new Q (and each is substitutable for the other), but it's going to 
have a new version number etc., so it will appear to be different. So 
the challenge isn't just to identify what re-use of an archetype leads 
to run-time substitutability, but also to recognise that certain 
redefinitions of an archetype don't break run-time substitutability.

Kerry

Dr Kerry Raymond
Distinguished Research Leader
CRC for Enterprise Distributed Systems Technology
University of Queensland 4072 Australia
Ph: +61 7 3365 4310, Fax: +61 7 3365 4311, www.dstc.edu.au




