Grahame,

One of the theoretical difficulties of the debate to which you are 
contributing so eloquently is separating the methods proper to computing 
from the methods defined by domain specificity.
Object-oriented programming, modular design, abstract data types, etc. are 
fundamental principles whose existence does not follow from any particular 
domain features. They lead to better information modelling in terms of 
architecture and computational semantics, and they have been adopted as the 
best engineering methodology for creating information processing systems.
The abstractions that bridge the gap between computing and domain-driven 
modelling have to be able to represent in machine processable form the 
business objects we are dealing with. What you are doing in this case is 
less important than what you are doing it to. The marriage of the two 
strands does not deny their individuality but produces an offspring that 
blends their essential features in an organic way.

openEHR contains RECORD in its name as the basic abstraction. The RECORD is 
a container structure with well-defined computational semantics - data 
entry is structured in the form of compositions, which may be of different 
types (instruction, observation, etc.) because of differences in computing 
requirements - data representation, etc.
Thus we have a model of a record (document) management system which at its 
highest level is generic. However, its type system is geared towards the 
faithful representation of information in the health care domain with the 
goal of producing a shared longitudinal patient record.
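The record-as-container idea can be sketched in code. The following Python 
sketch uses invented, simplified stand-ins for the reference-model classes 
(COMPOSITION, OBSERVATION and INSTRUCTION are real class names in the 
openEHR specification, but the attributes and methods here are illustrative 
only, not the actual model):

```python
from dataclasses import dataclass, field
from typing import List

# Simplified stand-ins for openEHR reference-model classes; the real
# model carries far richer attributes and invariants.

@dataclass
class Entry:
    """Generic content item inside a composition."""
    name: str

@dataclass
class Observation(Entry):
    """Recorded clinical data, e.g. a blood-pressure reading."""
    data: dict = field(default_factory=dict)

@dataclass
class Instruction(Entry):
    """An order or directive, e.g. a medication order."""
    narrative: str = ""

@dataclass
class Composition:
    """The unit of committal to the record: a typed container."""
    title: str
    entries: List[Entry] = field(default_factory=list)

@dataclass
class Record:
    """The longitudinal record: an append-only list of compositions."""
    subject_id: str
    compositions: List[Composition] = field(default_factory=list)

    def commit(self, c: Composition) -> None:
        self.compositions.append(c)

record = Record(subject_id="patient-001")
visit = Composition(title="GP encounter", entries=[
    Observation(name="blood pressure",
                data={"systolic": 120, "diastolic": 80}),
    Instruction(name="medication order",
                narrative="amoxicillin 500 mg t.d.s."),
])
record.commit(visit)
```

The point of the sketch is only that the container level is fully generic - 
nothing in Record or Composition knows anything about health care.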

HL7, as the name attests, is based on the messaging paradigm. The MESSAGE 
has historically been the modelling abstraction that defines HL7. Thus we 
have to ask ourselves which root abstraction better fits the twin criteria 
of clear and purposeful computational semantics and faithful representation 
of domain specifics.

You said:

> ok, we have real convergence here.
> OpenEHR works exactly like HL7 - define a reference model
> with all the needed semantics, and then refine things away
> in constraint models (and use the refinements as a basis for
> composition). So the principle is the same.

(I have the feeling that you are thinking of HL7 V4.0)
Convergence is a really strong statement. I understand your motivation and 
your honest desire to overcome the gap between openEHR and HL7. However, one 
has to be equally honest in appraising the
two approaches and their results.

openEHR doesn't "work exactly like HL7".  A brief look at the history of the 
two would show that while openEHR has always been based on sound software 
engineering and knowledge modelling principles, HL7's effort to retrofit a 
reference model onto its messaging structures is not necessarily a product 
of organic development, hence the gap between version 2 and version 3.

openEHR does define a reference model with all the needed computational 
semantics but leaves domain-specific knowledge modelling to the second level 
of its methodology - archetype modelling.
Implementation efforts have clearly shown openEHR's ability to produce 
clean computational models that enhance domain-specific semantic 
interoperability, first and foremost through the strict enforcement of the 
separation of concerns between information modelling and knowledge 
representation. The two sides click together at run-time in a very efficient 
and effective way.
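The run-time "click" between the two levels can be sketched as follows. 
This is a minimal illustration of two-level modelling, not real ADL: the 
information model supplies only generic name/value nodes, while a separate 
"archetype" object (invented here for illustration) carries the domain 
knowledge and constrains the data at run-time:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Element:
    """Generic node from the information model: a named value."""
    name: str
    value: float

@dataclass
class ElementConstraint:
    """One archetype node: a permitted name and value range."""
    name: str
    lo: float
    hi: float

    def accepts(self, e: Element) -> bool:
        return e.name == self.name and self.lo <= e.value <= self.hi

# Domain knowledge lives here, entirely outside the information model.
# The ranges are illustrative, not clinical reference values.
bp_archetype: Dict[str, ElementConstraint] = {
    "systolic": ElementConstraint("systolic", 0, 300),
    "diastolic": ElementConstraint("diastolic", 0, 200),
}

def validate(data: List[Element],
             archetype: Dict[str, ElementConstraint]) -> bool:
    """Run-time binding: generic data checked against domain knowledge."""
    return all(e.name in archetype and archetype[e.name].accepts(e)
               for e in data)

reading = [Element("systolic", 120), Element("diastolic", 80)]
validate(reading, bp_archetype)
```

Note that the information model (Element) never changes when the domain 
knowledge (bp_archetype) does - which is exactly the separation of concerns 
described above.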

This is not refining things away in constraint models.
The fundamental difference is how one deals with modelling or how one uses 
abstraction.

"Abstraction means to strip away the superficial or incidental aspects of a 
thing and to reveal its most important aspects, or essence, aka theory, 
hypotheses. Abstraction is important and can lead to deeper (beyond surface 
understanding) of things.
Abstraction can also go wrong. There are different ways to abstract. How do 
we test whether our abstractions (theories, hypotheses) are right or wrong?" 
(to quote a good blog: 
http://billkerr2.blogspot.com/2006/08/ascending-from-abstract-to-concrete.html).

The key to using abstraction in theoretical modelling is to understand how 
one can "ascend from the abstract to the concrete." I don't think 
constraining and refining actually play such an important role in the design 
process.

The same blog:

"Marx talked of "ascending from the abstract to the concrete" which on the 
surface can seem rather absurd.
How can a concrete view of reality be superior to a more abstract view? We 
all know that science takes us beyond the surface appearance of things and 
proposes deeper explanations that are not immediately apparent. And anyone 
who has studied child development knows that initially children view the 
world in a "concrete" way and as they become able to think better they 
become capable of thinking more abstractly. So isn't abstract thinking more 
advanced than concrete thinking?
The difficulty arises from confusing the Marxist idea of "the concrete" with 
the idea that what is concrete is that which is easily and immediately 
perceived via the sense organs - ie the surface appearance of things. (note: 
the whole idea of anything being "immediately perceived" is in any case 
incorrect. Perception always involves some cognitive processing).
However what Marx meant by "concrete" was not what is meant by the 
pedagogical distinction between concrete and abstract (or formal) thinking.
Marx's use of the word "concrete" has to do with the notion of truth and is 
not related to the idea of concrete thinking as child-like and primitive.
To ascend from the abstract to the concrete means to move from the initial 
ability to abstract away from surface appearance (via everyday and 
relatively easy generalisations and simple concepts) toward a richer and 
more accurate view of concrete reality. This does involve abstraction - but 
it is abstraction that is on its way to a richer and more concrete 
(truthful) world view."

You do not find the richer and more truthful world view through 
constraining and refining. There are two processes at work: the first 
maintains valid and verifiable data structures that exist within the type 
system of your information model; the second deals with concrete instances 
of data and information from the clinical world that live in your software 
system - their machine-processable nature is defined by computational 
semantics and platform specificity, but their meaning is not.

The concrete clinical notions modelled via archetypes are thus something 
more than constrained patterns. They produce concrete instances - objects 
much richer in content than the abstract model whose elements they 
represent.

The trick is to accomplish this ascent seamlessly within your defined 
universe on the basis of a stable single view of that universe. Your 
knowledge model represents the domain-specific semantics you are dealing 
with and it will exist with or without an underlying computational system. 
But machine processing requires that we have a working system even when our 
knowledge acquisition process is ongoing.

To sum up - information models are based on abstractions and data types 
chosen because of their computational semantics and suitability to 
faithfully represent instances of domain-specific business objects. The 
latter derive their meaning from the theoretical (clinical) body of 
knowledge, and the only information processing constraint is that they be in 
a machine computable form - that is, that they do not break the information 
model or the sharing of data between systems.

Archetypes represent concrete reality as machine processable entities that 
co-exist in the same knowledge universe. This universe is open to future 
study and representation. What is important in openEHR's view of the world 
is to abide by the open/closed principle, which means that our models should 
be closed in the sense that they can work today, while leaving the system 
boundaries open for future extension (sic!).

>
> We can generate [class models|schemas|wire formats] from
> constraint patterns. Doing so has benefits and costs. The
> same benefits and costs in either HL7 or OpenEHR. And one
> of the clear costs relates to persistence. I think we need
> to search for a better way, but that's not going to happen
> in the short term.

Persistence is orthogonal to design and modelling. Persistence closure 
requires that all object references be stored and then reproduced when 
necessary (it doesn't need to know about actual object creation). It also 
involves efficient query mechanisms, but again, we have two sides of the 
same story - generating class models, schemas and wire formats is a case of 
clean computational semantics, while creating, storing and querying 
knowledge structures (archetypes) has to do with domain-specific knowledge. 
I cannot 
comment on the costs of persistence but recent trends in database theory 
point in the direction of knowledge-aware persistence mechanisms.
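What I mean by persistence closure can be sketched briefly. In this 
illustrative Python sketch (an in-memory dict stands in for a database), 
every inter-object reference is stored as an identifier only, and the 
object graph is reproduced on load by resolving identifiers - the store 
never needs to know how the objects were originally created:

```python
import uuid

class Store:
    """Toy persistence layer: rows are flat dicts keyed by id."""

    def __init__(self):
        self._rows = {}  # id -> flat dict of fields

    def save(self, obj_id, fields):
        # References to other objects are stored as their ids only.
        self._rows[obj_id] = dict(fields)

    def load(self, obj_id, cache=None):
        # Reproduce the object graph by resolving each stored
        # reference; the cache preserves shared references, giving
        # closure over the whole graph.
        cache = {} if cache is None else cache
        if obj_id in cache:
            return cache[obj_id]
        row = self._rows[obj_id]
        obj = {}
        cache[obj_id] = obj
        for k, v in row.items():
            if isinstance(v, str) and v in self._rows:
                obj[k] = self.load(v, cache)   # resolve reference
            else:
                obj[k] = v                     # plain value
        return obj

store = Store()
obs_id, comp_id = str(uuid.uuid4()), str(uuid.uuid4())
store.save(obs_id, {"name": "blood pressure", "systolic": 120})
store.save(comp_id, {"title": "GP encounter", "entry": obs_id})
comp = store.load(comp_id)  # graph reproduced, reference resolved
```

Nothing here depends on the design of the stored objects themselves, which 
is the sense in which persistence is orthogonal to modelling.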

> But the OpenEHR & HL7 reference models are quite different.
> In most parts, the HL7 reference model is more abstract
> (Which is way Act gets 22 attributes). So harmonising between
> the reference models is going to require actual change rather
> than adroitly altering perspectives, as with data types and
> the constraint model things.

I don't think that one can discuss whose model is more abstract (or bigger). 
What interests me is whether the two models have the right abstractions for 
the task, and whether principles of information and knowledge modelling have 
been put to good use. Actual change should contribute to the goal of greater 
expressiveness and focused coverage rather than to the elusive goal of 
harmonisation.

> This will be hard, and painful. And it must involve compromise,
> so this is when we find out who really values collaboration.

Blood, sweat and tears? In the name of what?  What do you mean by 
compromise?
What is the nature of compromise in scientific research? This is a term that 
belongs in areas such as politics and conflict resolution. In research, the 
middle ground, as Goethe would say, is where conflict begins (my apologies 
for the imprecise quote).

Who really values collaboration? Again, strong words, Grahame.
What is the nature, program and goals of such collaboration? Is it the 
creation of an open Electronic Health Record based on computational and 
domain-specific semantic interoperability?
What is the mechanism of this collaboration? I'd be interested to see the 
specific research program and see how emerging joint efforts lead to 
uncompromising collaboration in the name of a clearly defined final goal.


> And we don't want to take away the real benefits that OpenEHR
> has in the process (same for HL7)

I agree,

Ogi Pishev 
