On 15/02/2016 10:25, Sebastian Garde wrote:

We have been through this a long time ago, I think, with Koray having exactly the same question and opinion as I had.

The downside, if you don't allow this kind of constraint(!) on functional attributes in archetypes, is that you cannot constrain the other two (real) attributes when modelling an archetype either, because they depend on the actual time at which the data is documented; thus you don't really have a way of constraining it at all.


yep - that's why it's there.

How to actually handle this generically when you receive actual attribute values that are approximately correct, but not, say, to the second, still seems problematic though, as Heath has just said.


in theory yes, and if the event values are set from system clock values or device-generated values, this is correct. However, if they are entered by a human from a drop-down choice or similar, then if the choices are '1 min', '2 mins', ... '10 mins', the events will clearly have exact and valid time offsets according to the constraint.

You can hardly reject an Apgar 5 min score because it was documented as being taken after 5 minutes and 2 seconds (who knows it that exactly anyway!).

In other archetypes, a difference of a few seconds may of course be very significant.
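One way to avoid rejecting data like the Apgar example above would be tolerance-based matching of the recorded offset against the nominal one. A minimal, hypothetical sketch (the `offset_matches` function and the 30-second tolerance are invented for illustration; ADL offset constraints do not currently express tolerances this way):

```python
from datetime import timedelta

def offset_matches(recorded: timedelta, nominal: timedelta,
                   tolerance: timedelta) -> bool:
    """Accept a recorded offset if it is within `tolerance` of the
    nominal offset, rather than requiring exact equality."""
    return abs(recorded - nominal) <= tolerance

# An Apgar score taken at 5 min 2 s passes with a 30 s tolerance:
print(offset_matches(timedelta(minutes=5, seconds=2),
                     timedelta(minutes=5),
                     timedelta(seconds=30)))  # True
```

The tolerance would of course have to be chosen per archetype, since, as noted, a difference of a few seconds may be very significant in other contexts.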

Maybe all this is an indication that (some) fixed events like the ones in the Apgar archetype should be modelled differently, e.g. as a repeated Cluster with an explicit time element, or as a coded text with its values tied to the respective SNOMED codes (even if that seems less elegant), and then avoid constraining the offset.


I don't think so - I think the current way of doing it is clinically very clear in the case of Apgar and any other situation in which the offsets are somewhat notional, and the times will be entered by the clinician (disregarding actual seconds differences).

Historical note: the current way of doing this was the one requested by clinical people some years ago (I argued against it).

But I don't see a general problem with constraints on computed properties: to evaluate them you need functions that operate on the data, i.e. just a normal implementation. If someone wants to try to evaluate the validity of data against an archetype using only a data view, e.g. in XML, then I think functional properties mentioned in archetypes will need e.g. XQuery or Schematron or some other kind of evaluatable expressions that can be attached to the schema.

Our early conceptions of how to do data/archetype validity checking always involved an 'archetype kernel' concept, i.e. a materialised object graph and archetype graph in the same compute space. In that approach you have all the functions of the relevant object types, and validating with function-based constraints is easy. There may be better conceptual approaches today.
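The kernel idea can be sketched roughly as follows. This is a hypothetical illustration, not openEHR reference code: the `Event` class stands in for the RM's EVENT (where `origin` actually lives on the parent HISTORY), and `offset_constraint_ok` is an invented name for the constraint check.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    """Simplified stand-in for the openEHR EVENT class."""
    time: datetime     # stored attribute
    origin: datetime   # in the RM this lives on the parent HISTORY

    @property
    def offset(self) -> timedelta:
        # Functional attribute: computed from stored data, not stored itself.
        return self.time - self.origin

def offset_constraint_ok(event: Event, required: timedelta) -> bool:
    # With the object graph materialised, an archetype constraint like
    # "offset matches PT5M" reduces to an ordinary function call.
    return event.offset == required

birth = datetime(2016, 2, 15, 10, 0, 0)
apgar_5min = Event(time=birth + timedelta(minutes=5), origin=birth)
print(offset_constraint_ok(apgar_5min, timedelta(minutes=5)))  # True
```

The point is simply that once both data and archetype are live objects in the same compute space, computed properties come for free from the class definitions, so function-based constraints need no special machinery.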

- thomas

To me it is not too helpful to formally constrain the offset without also _formally_ defining what the baseline (origin) is (= the time of birth). This is only indicated in the purpose of the archetype.

Since you cannot really do this easily, I don't see much value in modelling this by constraining the offset. And there aren't many other examples where the offset is constrained in the archetypes I have seen. Defining the precedence of time and offset would be another way, as Koray says.

By the way, EVENT.offset is actually not the only functional attribute that I have seen constrained:

- is_integral for a DV_PROPORTION, or

- type for a PARTY_RELATIONSHIP (here type == name, which makes it a bit easier)

are others, but they are probably easier to manage than the offset.

We used to have a check in CKM to at least flag these “commonly constrained functional properties”, as we called them, but took it out because it was too confusing.


_______________________________________________
openEHR-technical mailing list
[email protected]
http://lists.openehr.org/mailman/listinfo/openehr-technical_lists.openehr.org
