This was a great post, Gunther! You did a lot to compare and contrast
the approaches. I did not understand some of the ramifications of what
you said, so I will ask some questions, with in-line reminders of your
text. As I understand it, your main contrast is that HL7 V3 builds
specifics into the model and GEHR does not, relying instead on templates:
Gunther Schadow wrote:
>
> In HL7 we put fundamental classes and
> their attributes into the model. For example, a medication is
> described with a fixed set of attributes for dose, form, route, etc.
> Our information model standardizes these things, GEHR requires
> templates for it.
>
I can understand your point of view, because you take the HL7 V3
information model (with all its specificity) to be the best that there
is and the only one worth working with. It turns out that other people
want somewhat different specificity. That is why GEHR uses templates;
think of them as a layer of abstraction, or of complexity if you prefer,
but a configurable one. For what it's worth, I have done database
modeling for decades, and I have seen every model change and evolve
under what I call the crucible of implementation and practice. I have
been hoping that things like XML, terminology services, and simple,
expressive models like GEHR would allow different expressions from a
common core. Otherwise we are left with the conundrum that each
designer does her own thing from start to end and never really gets
anywhere substantial, and open source is not possible.
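To make the contrast concrete, here is a purely illustrative sketch (the class, template, and function names are my own inventions, not actual HL7 or GEHR artifacts) of the two styles: a model that fixes the attributes up front, versus a generic entry constrained by a configurable template:

```python
# Style 1: the information model itself fixes the attributes
# (HL7-like; illustrative only).
class Medication:
    def __init__(self, dose, form, route):
        self.dose = dose
        self.form = form
        self.route = route

# Style 2: a generic entry plus a template that names the required
# fields (GEHR-like two-level modeling; illustrative only).
MEDICATION_TEMPLATE = {"dose", "form", "route"}

def make_entry(template, **fields):
    # The template, not the code, decides which fields are required,
    # so the "specificity" is configurable rather than compiled in.
    missing = template - fields.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return dict(fields)

rx = make_entry(MEDICATION_TEMPLATE, dose="5 mg", form="tablet", route="oral")
```

The point of the second style is that a different community can swap in a different template without touching the model code.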
>
> Then, in HL7 we have the principle workflow constructs which we
> can use to fully standardize the definition of both guidelines
> and care plans.
>
Are these workflow constructs a complete workflow specification? I ask
because a tremendous amount of work has gone into workflow standards
between the Workflow Management Coalition (WfMC) and the OMG, resulting
in a workflow API that allows any object to be workflow-enabled and
managed by multiple workflow implementations simultaneously.
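By "workflow enabled" I mean something like the following toy sketch (my own invented names, not the actual WfMC/OMG interfaces): the engine drives any domain object through a generic state-transition interface, without knowing the object's type:

```python
# Toy sketch of "workflow enabling" an arbitrary object: a workflow
# engine sees only the generic interface, never the domain type.
class WorkflowItem:
    def __init__(self, obj, states, start):
        self.obj = obj          # any domain object being managed
        self.states = states    # allowed transitions: {state: {next_state, ...}}
        self.state = start

    def transition(self, next_state):
        # Reject any move not permitted by the transition table.
        if next_state not in self.states.get(self.state, set()):
            raise ValueError(f"cannot go {self.state} -> {next_state}")
        self.state = next_state
        return self.state

order = WorkflowItem(obj={"id": 42},
                     states={"new": {"active"}, "active": {"done"}},
                     start="new")
order.transition("active")
```

Because the transition table is data, several engines could in principle manage the same object, which is the property the standards work aims at.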
>
>
> To be fair however, I would be careful with maintaining that GEHR
> is more comprehensive if it just lacks specificity. This brings up
> my magic triangle again:
>
>                    simplicity
>
>     generality                  relevance
>   (& evolvability)        ("meaningful-ness")
>
> Every two of these may go in opposition to the third and it is a human
> art to trade off between all of these.
>
I would think that any one of these can move away from (or towards) the
others, but I am in full agreement that understanding the nature of the
trade-offs is essential. Moving all three towards each other is a
paradigm shift in the sense of Kuhn's work. Kuhn's example is how
elaborate physics had become in the latter part of the 19th century in
order to accommodate the known phenomena, and how relativity reset the
constructs to a simpler but more expressive state, from which we have
now evolved into even greater elaboration while everyone looks for the
GUT (grand unified theory).
Could something similar be happening in information management? To
show my age: my practical work pre-dates relational databases, so I
fully participated in the movement away from complex and unwieldy
network databases to the simpler elegance of relational systems, and I
was a follower of Codd's position that relational theory was provably
complete. But practical implementations soon got nasty, distribution
never really worked, and objects came along and raised the complexity
of our structures yet again. OODBMSs failed to relieve the problems.
So from my view, we are ready for another paradigm shift, and maybe
that makes me jump too quickly on things that seem simple and elegant,
and resist large-scale, complex models of the problem domain.
My point about the internet and open source was that complexity which
evolves according to practical experience is the preferred path over
complexity designed in at the start. Even if you throw away the
initial design completely in favor of a more complex one, you then have
a real rationale for making the trade-offs.