Hi Thomas, I think you are missing the point here with the translation. Of course English English, Scottish English, US English and Australian English differ.
What I am talking about is exactly that the concepts, language, terms and symbols used, e.g. in English English, need to be and can be addressed at the clinical level, where clinicians contribute content for modeling. However, I should not read a different message in the English English language if I use an iPad, a Nokia phone, a printed newspaper, a pencil-written note or a Windows PC, when the source material is the same intended concept and description (I ignore markup influences here). And that is exactly what is going wrong here: the technical representations create difficulties because they do not sufficiently address clinical precision.

In my experience, HL7 achieves up to 98% of the ability to represent clinical concepts and behaviour, whereas openEHR archetypes have several limits, bringing them to a maximum of 50% of representing clinical concept characteristics. In particular, in openEHR it is too hard to express relationships with coding systems, e.g. SNOMED CT (the HL7 code attribute and the OID system are almost perfect and widely implemented for this); I have seen no proper archetype allowing the same. It is difficult to express relationships between data elements in archetypes, e.g. which data elements are organised in what structure with each other (such as the HL7 component relationship), or to define the algorithm used to create a sum score (such as the HL7 derivation method attribute). The option to model workflow or behaviour of concepts (e.g. the HL7 mood code for an observation) has no equivalent in openEHR archetypes. These are all options of proper UML modeling, applied in HL7 v3. Perhaps this explains why openEHR does not like UML: does it reveal the weaknesses of archetyping too much?
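[For readers less familiar with the HL7 v3 mechanisms named above, here is a minimal illustrative sketch, not a complete or schema-valid message instance, of how a sum-score observation can carry a SNOMED CT binding via a codeSystem OID, a derivation expression, and component relationships to its item observations. Element and attribute names follow the RIM (Observation, moodCode, derivationExpr, component); the actual codes are left unspecified.]

```xml
<!-- Illustrative HL7 v3 fragment only, not a valid instance. -->
<observation classCode="OBS" moodCode="EVN">
  <!-- Coding-system binding: the codeSystem OID
       2.16.840.1.113883.6.96 identifies SNOMED CT -->
  <code code="..." codeSystem="2.16.840.1.113883.6.96"
        codeSystemName="SNOMED CT" displayName="total score"/>
  <value xsi:type="INT" value="14"/>
  <!-- Act.derivationExpr: how the derived sum score is computed -->
  <derivationExpr>sum of the component item scores</derivationExpr>
  <!-- component act relationship: organises item observations
       under the total-score observation -->
  <component typeCode="COMP">
    <observation classCode="OBS" moodCode="EVN">
      <code code="..." codeSystem="2.16.840.1.113883.6.96"/>
      <value xsi:type="INT" value="2"/>
    </observation>
  </component>
</observation>
```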
Kind regards,
William Goossen
director, Results 4 Care

-----Original Message-----
From: Thomas Beale <[email protected]>
To: openehr-technical at openehr.org
Sent: Fri, Nov 26, 2010 10:11 am
Subject: Re: HL7 modelling approach

William,

Not to continue on the main debate - but with respect to your statement: "I am still of the opinion that the semantic content of this R-MIM is 100% the same as the same content in an archetype."

It might be the case. But it is a problem like trying to express the same thoughts in the Malay language and in Russian. In the former there are no tenses, and you have to add words indicating time, like 'yesterday', to do it. Russian is a richer language, so expressing sophisticated concepts in Russian a) takes fewer words and b) is likely to more closely express subtle concepts, tone of expression, etc.

In our world, we have to be able to machine-translate models to say they are the same, or to reuse one expression in the other world. If we can't do that, it is of only academic interest to say they are 'the same'. And the difficulty of doing machine translation on natural languages shows us how hard this is. Even across European languages, Google Translate and other such tools are quite weak. Model translation as practised in IT does work somewhat in some circumstances - but it is fairly unreliable (e.g. UML tool round-tripping). What enables or hinders such translation is the closeness or otherwise of the underlying grammatical structures; in our world, this is the reference models.

- thomas

On 25/11/2010 22:13, Williamtfgoossen at cs.com wrote:

Just an example of downstream modeling designs in HL7, to counterbalance Thomas' erroneous comments on HL7 modeling.

1. The path to get to implementable HL7 v3 XML is quite clear: we have the RIM as building blocks offering structure, attributes and behaviour. Pure UML modeling.

2.
From the LEGO bricks, using constraining, we create LEGO guidelines (similar to the booklets that help, with step-by-step pictures, to put the right bricks together). This is the Domain Message Information Model (D-MIM). Each D-MIM takes into account a multitude of use cases identified and defined by clinicians, or drawn from projects with clinicians.

3. Following the guideline booklet, actual messages are built: the R-MIMs.

4. The object-oriented models in the R-MIM are serialised into XML.

OK, that is the basic pattern. I have used this to create about 100 R-MIM examples covering a lot of clinical content at the level that, in the 13606 world, would be an entry. I am still of the opinion that the semantic content of such an R-MIM is 100% the same as the same content in an archetype.

Well, I came to HL7 International with this and got two very important comments from experts in different committees:

1. That is an excellent example of representing clinical content in the HL7 v3 specification; it does follow the steps and rules 1-4 above. So the modeling did fit.

2. If we continue to work this way and downstream have to make R-MIMs and serialise them, we face a combinatorial explosion. If you have 3000 entry-level data elements / data element clusters and 1000 assessment scales and want to vary those in the messages, we face 3000 x 1000 x 5? variations. That is not sustainable.

Hence we abandoned creating fully specified R-MIMs for each clinical artifact and changed the route to what is now the DCM. DCMs still allow expressing, at the conceptual level, what clinicians want and need, but downstream they deliver only XML output with structure, coding etc. in such a format that it can be inserted into one and the same base message. Hence the combinatorial explosion is reduced to 1, for instance for the care record, which holds the clinical statement pattern allowing such serialised DCM content to be included. Hence, in this example, the downstream implementation in HL7 guided the modeling approach.
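[The "one base message" route described above can be pictured as follows. This is an illustrative sketch only: the element names are simplified placeholders, not the actual care record schema. The point is that each DCM delivers its XML output in a shape that fits one and the same clinical-statement slot, so a single base message replaces per-artifact R-MIMs.]

```xml
<!-- Illustrative sketch only: one care-record message with a
     clinical-statement slot that any DCM-derived fragment can fill,
     avoiding a separate fully specified R-MIM per clinical artifact. -->
<careRecord>
  <clinicalStatement>
    <!-- DCM output #1, e.g. an assessment-scale total score -->
    <observation classCode="OBS" moodCode="EVN">...</observation>
  </clinicalStatement>
  <clinicalStatement>
    <!-- DCM output #2, e.g. a cluster of related data elements -->
    <organizer classCode="CLUSTER" moodCode="EVN">...</organizer>
  </clinicalStatement>
</careRecord>
```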
_______________________________________________
openEHR-technical mailing list
openEHR-technical at openehr.org
http://lists.chime.ucl.ac.uk/mailman/listinfo/openehr-technical

