To put some numbers on things... in a 2012 snapshot of the openEHR.org 
CKM archetypes there are:

  * 267 compiling (i.e. technically valid) archetypes
      o including 94 specialised ones
  * In these archetypes there are:
      o 3208 'archetypable' nodes (i.e. LOCATABLE nodes)
      o of which 2163 are leaf-level nodes with DATA_VALUE objects.

If we concentrate on the leaf-level nodes, we can think of 2163 
re-usable data points / groups for general medicine. That's not nothing.

The downside:

  * the quality is variable (due to insufficient modelling work)
  * the coverage of medicine is patchy. Some areas are heavily covered,
    others have almost nothing.

Nevertheless... these archetypes are /commonly re-used/ in local 
deployment situations, including some of the ones mentioned here 
<http://www.openehr.org/who_is_using_openehr/healthcare_providers_and_authorities>.
 
Re-used usually means that:

  * they were either used as is, or further specialised in order to add
    or modify some data points / groups
  * they were used in locally built templates, to create data set
    definitions that are actually used in systems.
  * they were also used by at least one major national programme
    (Australia) as a basis for production health information definitions
    for national use. Some of these archetypes will be re-incorporated
    into openEHR.org.
  * 30 demographic archetypes were provided by a Brazilian research group
  * numerous archetypes have had translations added by various health
    professionals and research groups.

With all the limitations implied in the above (and given the relative 
lack of endorsement by official bodies, who prefer largely 
hard-to-implement 'official' standards), I don't think this can be 
claimed to be a failure.

As I said before, although there are a lot of things that can be 
improved (e.g. reference model simplifications, ADL/AOM 1.5 etc), there 
has been no thought of getting rid of or substantially changing:

  * the basic 3 levels of reference model, archetypes (the re-usable
    library of domain definitions) and templates (the usually locally
    produced data set definitions)
  * the ability to specialise within these levels, i.e. use an
    inheritance relationship
  * the ability to connect one model to other(s) by association
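
The three levels and the two relationships above can be sketched as 
follows. This is a minimal, hypothetical Python sketch - the class and 
attribute names are purely illustrative, not the openEHR API - showing a 
generic reference model, an archetype as constraints over it, 
specialisation by inheritance, and a template as a further-narrowed, 
locally produced data set:

```python
# Hypothetical sketch of 3-level modelling: reference model, archetype,
# template. Names are illustrative only, not the openEHR specifications.
from dataclasses import dataclass
from typing import Optional

# Level 1: reference model -- generic, domain-neutral structure.
@dataclass
class DataValue:
    name: str
    value: float
    units: Optional[str] = None

# Level 2: archetype -- a re-usable library definition, expressed as
# constraints on what the reference model permits.
class Archetype:
    def __init__(self, archetype_id, allowed_nodes):
        self.archetype_id = archetype_id
        # map of node name -> set of permitted units
        self.allowed_nodes = dict(allowed_nodes)

    def validates(self, dv: DataValue) -> bool:
        return (dv.name in self.allowed_nodes
                and dv.units in self.allowed_nodes[dv.name])

# Specialisation: a child archetype inherits its parent's constraints
# and adds or narrows data points.
class SpecialisedArchetype(Archetype):
    def __init__(self, archetype_id, parent, extra_nodes):
        merged = {**parent.allowed_nodes, **extra_nodes}
        super().__init__(archetype_id, merged)

# Level 3: template -- a locally produced data-set definition that
# selects and further constrains nodes from the archetype library.
class Template:
    def __init__(self, archetype, selected_nodes):
        self.archetype = archetype
        self.selected = set(selected_nodes)

    def validates(self, dv: DataValue) -> bool:
        return dv.name in self.selected and self.archetype.validates(dv)

bp = Archetype("blood_pressure",
               {"systolic": {"mmHg"}, "diastolic": {"mmHg"}})
bp_paed = SpecialisedArchetype("blood_pressure_paediatric", bp,
                               {"cuff_size": {"cm"}})
clinic_form = Template(bp_paed, {"systolic", "diastolic"})

print(clinic_form.validates(DataValue("systolic", 120.0, "mmHg")))  # True
print(clinic_form.validates(DataValue("cuff_size", 12.0, "cm")))    # False: excluded by this template
```

The point of the sketch is only the shape of the relationships: the 
library level (archetypes) grows by specialisation, while each template 
is a cheap local selection over it - which is where the scaling benefit 
discussed below comes from.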

Indeed, the direction of development is to strengthen all of these. If 
you consider each level of inheritance (ignore the RM) as a 'level', 
this is what I would call 'multi-level' modelling. From the discussions 
so far, I think the MLHIM approach is essentially a method of defining 
XSD document definitions as constrained versions of an XSD-expressed 
base information model. As Tim explained, there is no specialisation, 
nor any distinction between the library (archetypes) and data-set 
(template) levels. MLHIM may be easier to implement in the short term. 
However, I think the capability for scaling (implementing numerous new 
data-sets with diminishing effort as the library level grows) 
and re-use will be relatively limited in the medium to long term. I also 
think the ability to generate different kinds of artefacts from the 
underlying definitions - e.g. UI data capture screen definitions, UI 
display definitions, PDF definitions, WSDL and so on, will be relatively 
limited.

It may be that the task we set in openEHR is too ambitious! Anyway, this 
is the world as I see it...

- thomas
