Hi Sebastian,

On 19/08/2010 11:53, Ian McNicoll wrote:
> PART 1 of Sebastian's latest reply to me ...
>
> ============================================================
>
> .... My main approach and views while looking over archetypes and
> specifications are from a technical perspective, as I work as a
> software developer.
>
> About the name attribute issue: whether or not to use purpose() or
> the name/defining_code attributes directly is not so relevant; my
> real concern is that the name holds the type and the semantics of
> the (owning) object. I think I understand the principles behind
> that (as you already explained in previous emails), and also the RM
> design as well as the design decisions on CKM archetypes made by
> Sergio. But when it comes to software development, if I want to get
> the meaning of that object the only solution is to know the item
> structure beforehand, so any piece of software handling
> PARTY_IDENTITY will be highly dependent on the archetype itself.
well, queries will be archetype-dependent, that is the idea.

> Any structural change or re-coding of local terms made in those
> archetypes will require changes in the software.

this depends on how the software is designed. If it is designed to use
AQL queries, then the software itself should not require changes; only
the query might require changes.

> Even worse, if I make the software rely on DV_TEXT/value instead of
> DV_CODED_TEXT/defining_code (which is a bad idea from my
> perspective) I will get problems just by changing the language of
> the archetype or by renaming the node at the template level.

normally, you would use both the value and the defining_code fields of
a DV_CODED_TEXT object.

> Indeed, at-codes give us language-neutral terminology, and
> designers or archetype owners have the flexibility to do whatever
> they need in an archetype, but they will have to agree on using and
> sharing archetypes and templates. Still, if there is no common
> ground to share and safely transfer semantics (by which I mean the
> knowledge provided by a specific archetype) between parties, I
> don't see interoperability working that smoothly.

that is correct - and that is what these documents are about:
http://www.openehr.org/wiki/display/spec/Development+and+Governance+of+Knowledge+Artefacts
Under the collaboration of openEHR and IHTSDO, these will be further
developed into a basis for reliable sharing and improved
identification.

> Let me give you an example of what can go wrong, and how:
> Suppose hospital A is going to use CKM archetypes, while another
> hospital B develops its own. A is going to exchange openEHR
> information with hospital B, one of the topics being PATIENTs.
> As A is going to use openEHR-DEMOGRAPHIC-PERSON.person.v1 and
> openEHR-DEMOGRAPHIC-CLUSTER.person_identifier_iso.v1, the 'legal
> identity' type is modeled by that famous
>
> identities[openEHR-DEMOGRAPHIC-CLUSTER.person_identifier_iso.v1]/name/defining_code = 'local:at0027'
>
> B goes for other archetypes, similar to CKM but not the same,
> developed in-house; for the sake of the exercise the original (and
> only) language is Dutch: openEHR-DEMOGRAPHIC-PERSON.b_persoon.v1
> and openEHR-DEMOGRAPHIC-CLUSTER.b_persoon_identifier.v1, and B will
> use the following path to test legal identity:
>
> identities[openEHR-DEMOGRAPHIC-CLUSTER.b_persoon_identifier.v1]/name/value = 'juridische identiteit'
>
> As time passes, B learns that it must use internal codes to be more
> efficient in handling or exchanging data, so it publishes a new,
> modified version, openEHR-DEMOGRAPHIC-PERSON.b_persoon.v2, and the
> path becomes:
>
> identities[openEHR-DEMOGRAPHIC-CLUSTER.b_persoon_identifier.v2]/name/defining_code = 'local:at0008'
>
> Notice, it is at0008 because it is a completely different
> archetype, and that was the code rightfully assigned by the
> archetype tool used in the B organization.

this is normal - if two organisations want to share data, they need to
share the archetypes concerning that data. It doesn't matter what
mechanism they use; this must be the case. Doing anything that works
means relying on some resource external to both, or shared - whether
it is archetypes or SNOMED or ICD or anything else. What does not need
to be shared is templates - each institution can happily create as
many as it wants, based on the same archetype repository.
Each institution can also create its own private archetypes for data:
* not intended to be shared
* intended to be shared one day, when the private archetypes will be shared
* intended to be shared, but not processed by the receiver in an
  intelligent way (though display would still be possible)

> Now, as you might see already, if B wants to exchange data with A,
> they will have to share archetypes, because looking only at the
> at-codes is not enough (at0027 vs at0008) - i.e. A must access and
> read openEHR-DEMOGRAPHIC-PERSON.b_persoon.v2 in order to validate
> and use the terminology associated with at0008.

yes, this is because the archetype acts as the namespace for the
at-codes.

> Suppose that is not an issue and A accomplishes that reading... but
> now the question is how A can work out the right code associated
> with legal identity. Previously, in its own domain, A was
> instructed by the application level to look for at0027, but there
> is no clue as to how to locate the same concept in B's domain.

but you are still presupposing that each place is developing
essentially the same archetypes independently. This won't work, and
isn't sustainable in terms of work anyway - it will obviously be less
work to develop shared archetypes once (and of better quality) than to
develop the same thing many times.

> I guess in this case a quite good solution (as you already
> mentioned) is to use term bindings to a common external
> terminology. It will not enforce but probably just facilitate
> common semantics. Without it, the above use case will be quite a
> challenge to solve for good. Still, adopting this solution requires
> agreement between A and B on using a common external terminology
> service, which might add some adoption issues (maybe even
> scalability) on top of the whole openEHR solution.

for many reasons this is also useful.
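The term-binding solution discussed above can be sketched as follows.
This is a minimal illustration, not a real openEHR API: the binding
table, the 'ext-term' terminology name, and the concept id are all
invented purely for this example.

```python
# Each archetype publishes bindings from its local at-codes to concepts
# in an external terminology shared by both hospitals. The table below
# and the 'ext-term' concept ids are hypothetical.
BINDINGS = {
    ("openEHR-DEMOGRAPHIC-CLUSTER.person_identifier_iso.v1", "at0027"):
        "ext-term::legal-identity",   # hospital A's archetype
    ("openEHR-DEMOGRAPHIC-CLUSTER.b_persoon_identifier.v2", "at0008"):
        "ext-term::legal-identity",   # hospital B's archetype, different at-code
}

def same_concept(node_a, node_b):
    """True if two (archetype_id, at_code) pairs bind to the same
    external concept, even though the local at-codes differ."""
    a, b = BINDINGS.get(node_a), BINDINGS.get(node_b)
    return a is not None and a == b

# A's at0027 and B's at0008 resolve to the same shared concept:
print(same_concept(
    ("openEHR-DEMOGRAPHIC-CLUSTER.person_identifier_iso.v1", "at0027"),
    ("openEHR-DEMOGRAPHIC-CLUSTER.b_persoon_identifier.v2", "at0008")))
# True
```

Note that the binding only facilitates, and does not enforce, shared
semantics: if either archetype lacks a binding, the lookup fails and
the receiver is back to reading the other party's archetype.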
But if A & B really don't want to cooperate, or share, or agree on
anything, it is kind of hard to help them!

> Another solution, which I was referring to in my previous email, is
> to use openEHR terminology directly instead of local at-codes. This
> would be pretty similar (at least from my point of view) to the way
> the Entry package is regulated (see for instance the ISM_TRANSITION
> class attributes), the only difference being that there the
> specifications (instead of archetypes) are 'enforcing' terminology.
> Besides this, as you already mentioned, it would also allow re-use
> of some terms across demographic archetypes. On the other hand, can
> you tell me how come there is openEHR terminology for instructions,
> audit change types, composition categories, etc., instead of coding
> those terms locally at the archetype level?

The openEHR terminology is used for attribute codes in the model where
it is thought that the possible set of codes is universally applicable
and unlikely to change, or only very slowly. Any coded attribute whose
value set could be contextually dependent is going to require a more
dynamic and sophisticated approach to terminology. At the moment this
is done in archetypes, because there is nowhere else to do it.
However, the future approach may be that it could be done in SNOMED
national extensions, or lower-level extensions of SNOMED.

- thomas
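The distinction running through this thread - keying software off
defining_code rather than the display value, and telling archetype-local
codes apart from fixed openEHR terminology codes - can be sketched as
follows. The Python class names are my own shorthand for the RM types
DV_CODED_TEXT and CODE_PHRASE, not the openEHR reference API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodePhrase:
    terminology_id: str   # e.g. 'local' (archetype-defined) or 'openehr'
    code_string: str      # e.g. 'at0027'

@dataclass(frozen=True)
class DvCodedText:
    value: str            # human-readable, language-dependent
    defining_code: CodePhrase

# The same node in two languages: the value differs, the code does not.
english = DvCodedText("legal identity", CodePhrase("local", "at0027"))
dutch = DvCodedText("juridische identiteit", CodePhrase("local", "at0027"))

print(english.value == dutch.value)                  # False
print(english.defining_code == dutch.defining_code)  # True

def is_archetype_local(t: DvCodedText) -> bool:
    """'local' codes are only meaningful inside their defining
    archetype; 'openehr' codes come from the fixed openEHR
    terminology and carry the same meaning everywhere."""
    return t.defining_code.terminology_id == "local"
```

This is why relying on DV_TEXT/value breaks under translation or
template-level renaming, while defining_code survives both - as long as
the receiver can resolve the archetype that namespaces the at-code.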

