2009/7/9 Thomas Beale <thomas.beale at oceaninformatics.com>:
> Seref Arikan wrote:
>
> Hi there,
> I need to check out the source code for details, but at the moment I am
> using the combinations of cardinality, occurrence and existence to figure out
> the semantics of certain choice operations.
> For example, when a clinician wants a field with zero or up to N options,
> having these in AOM is the only way I can understand it. In the UI this
> corresponds to a field with a label (Abdominal exam results) and a couple of
> checkboxes, where all, none, or a combination of them may be
> selected. I am not sure I can clearly see the logic behind this choice
> though.
> If I get it right, it is "now that we have reference model checking, we
> don't need to express these in AOM, since expressing a constraint when it
> actually does not constrain anything is not nice". Am I correct?
>
> yes, that's correct. Currently (ADL 1.4), we have existence, cardinality and
> occurrences as either mandatory, or with a default, so that they are always
> in the AOM parse tree, even if not set in the archetype. But the overall
> semantics of archetypes are 'additional constraints on an underlying model';
> it doesn't make sense to have any of these multiplicity-related constraints
> either mandatory, or to have defaults for them. What we should have is each
> archetype containing only constraint 'overlays' on the 'next model down',
> which for unspecialised archetypes is the RM, and for specialised archetypes
> is the 'stack' of specialisation parents + the reference model - in much the
> same way as object-oriented class definitions work.
>
> Now in general I'd like to have RM model checking as an option that I can
> resort to when I want to. If RM model checking becomes a necessary part of
> the way AOM is used, does this not introduce a spillover effect for those
> who want to use ADL + AOM with other RMs?
>
> no, it just means that they should give the reference model checking
> component a different RM schema to use. The openEHR schema I have used (this
> is not standardised!) is here -
> http://www.openehr.org/svn/ref_impl_eiffel/BRANCHES/specialisation/apps/adl_workbench/app/rm_schema.dadl
> (see
> http://www.openehr.org/wiki/display/dev/Machine-readable+model+representations+of+openEHR
> for explanations); clearly the same kind of schema could be defined for ISO
> 13606 or any other object model (I can't say for sure about HL7v3 RMIM
> artefacts, because they are a custom construct, but an approximation could
> no doubt be made).
>
> At the moment ADL + AOM is quite disconnected from the RM, a kind of late
> binding, but if RM checking replaces some of the things handled by AOM now,
> even if we get some sort of type safety, I feel this may make some things
> harder for us in the future. (I hope the type safety, late binding / early
> binding example is right)
>
> I would suggest that this thinking is seeing archetypes as a replacement for
> an information model, rather than a constraint facility for it.
>
> A solid example is: I have the intention of expressing RM-class-like
> constructs with ADL in order to describe decision support related
> constructs. With the current AOM implementation, everything that describes
> the structure is within AOM,
>
> but is it? There are many attributes that most archetypes never mention,
> e.g. ENTRY.subject is rarely archetyped; COMPOSITION.context is only
> sometimes archetyped; there are many other such attributes whose values
> can't sensibly be constrained before runtime.
>
> but if some of this is shifted to RM, then in my case, I'd also need a
> modified RM checker to make use of AOM.
> Now, if I'm getting all of this right, I'd rather have anything related to
> RM checking as an optional capability.
>
> The problem with this is that you can't check the validity of the archetypes
> in the first place (and don't forget, the system's job is to create RM
> instances ultimately - the archetypes are just a guide).
>
> Now, you might argue that your system consumes only archetypes that are
> assumed to have been checked already (see the diagram
> http://www.openehr.org/wiki/display/spec/Development+and+Governance+of+Knowledge+Artefacts
> for an idea of the overall system in which this might occur). This is fair
> enough - it just means that another part of your system has already done the
> checking. The next thing you might say is: ok, but now if cardinality is no
> longer mandatory, my components can't tell which attributes are container
> types and which are not. This would be true if your components were working
> from source artefacts only. In the near future however we need to move to a
> situation where the 'compilation' operation produces operational i.e.
> runtime-ready artefacts, which are:
>
> - inheritance-flattened
> - represented as an object serialisation of the AOM, e.g. in XML, dADL or
> whatever. They could still be serialised in ADL, but this is not the most
> useful format, because it doesn't carry all the information - it needs the
> RM to be present.
>
> This is what happens in object-oriented programming environments, although
> most of us never see it. People who know about C++ vtables, Eiffel
> 'flat-forms' and whatever the equivalents are in Java and C# will know
> about this.
>
> Admittedly we are not quite there today (I don't know what progress the
> Java ADL compiler has made on supporting either RM checking or flattening),
> so the question is: do we add a new keyword to ADL like David Moner
> suggested to indicate that an attribute is a container? As a long term idea,
> I don't really like it, but as a short term fix, it would probably not be
> that hard to do (obviously easy for the parser, but the question of what
> effect it would have on current archetypes is more difficult to answer).
>
> This is not the only case where we are using a constraining attribute to
> express no constraint at all. Free text in a field is defined with a
> constraint with allowed values of *, is it not? I can adapt to changes
> here, hoping that the Java parser will adapt to them too, but I'd love to
> have a couple of real life cases that demand these changes in exchange for
> the hours that'll go into coding.
>
> the '*' constraint means 'anything goes', i.e. no constraint. But normally
> there is no reason to mention an attribute at all, if there is no constraint
> to put on it.
>
> It would be useful to know what state the Java ADL parser is in....

Hi all,

Sorry for being late on this.

Currently the Java ADL parser only does the parsing and produces the
AOM objects from ADL (pure parsing). RM-based AOM validation is done
in a separate component, "archetype-validator", which is now being used
in the CKM (thanks to Sebastian =)

The archetype-validator has access to the RM class definitions
through an RMInspector class using Java reflection. More specifically,
the annotations in the Java implementation of the RM classes are
extracted and used for RM-related validations. The same RM inspection
mechanism is also used for other purposes in the rest of the Java
implementation, e.g. RM data binding between dADL, XML and in-memory
RM objects. Java classes are used as the computer-readable form of the
RM definitions to minimize maintenance, since the Java classes are
supposed to be kept up to date with the specs.
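As a rough sketch of what such reflection-based RM inspection looks like (class and attribute names here are illustrative stand-ins, not the actual openEHR Java implementation, and the real RMInspector also reads annotations):

```java
import java.lang.reflect.Field;
import java.util.Collection;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of reflection-based RM inspection; names are
// illustrative, not the actual openEHR Java reference implementation.
public class RMInspectorSketch {

    // Stand-in RM class; the real RM classes additionally carry
    // annotations that the archetype-validator extracts.
    static class Cluster {
        String name;        // single-valued attribute
        List<Object> items; // multiple-valued (container) attribute
    }

    // True if the named attribute's declared type is a container
    // (a Collection or a Map) - something only the RM can tell us.
    public static boolean isMultiple(Class<?> rmClass, String attribute) {
        try {
            Field f = rmClass.getDeclaredField(attribute);
            Class<?> t = f.getType();
            return Collection.class.isAssignableFrom(t)
                    || Map.class.isAssignableFrom(t);
        } catch (NoSuchFieldException e) {
            throw new IllegalArgumentException(
                    "no such RM attribute: " + attribute, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(isMultiple(Cluster.class, "items")); // prints true
        System.out.println(isMultiple(Cluster.class, "name"));  // prints false
    }
}
```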

However, the platform-independent RM definition in dADL proposed by
Tom seems very attractive to me. It could be the single source of the
RM definitions for all major implementations. It clearly has
advantages over platform-specific solutions. Besides, a dADL-based RM
representation could be used to represent different versions of RM
releases, enabling co-existence of different RM versions in the same
runtime system.
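To make that concrete, here is a purely hypothetical fragment of what one class entry in such a dADL-based RM schema might look like (syntax and attribute names are my guesses modelled loosely on the rm_schema.dadl linked above, not the actual file):

```
-- hypothetical dADL fragment describing one RM class
["CLUSTER"] = <
    name = <"CLUSTER">
    ancestors = <"ITEM">
    properties = <
        ["items"] = <
            type = <"List<ITEM>">
            existence = <|1..1|>
        >
    >
>
```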

Regarding the question raised by Tom and David, I would prefer a
generic C_ATTRIBUTE with an optional cardinality attribute. The parser
would then only fill in the list and the cardinality when present,
without checking or guessing whether it is dealing with a single-valued
or a multiple-valued attribute. The necessary validation would only be
done at a later stage, when the RM definitions are available. The
correct use of a C_ATTRIBUTE constraint would rely on knowledge of the
underlying RM classes, which should always be available to the
authoring environment and the EHR runtime systems.
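A minimal sketch of such a generic C_ATTRIBUTE, with all names (Interval, CAttribute) as illustrative stand-ins rather than the actual reference implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a single generic C_ATTRIBUTE: the parser records
// cardinality only when the archetype states one, and leaves the single-
// vs multiple-valued decision to later, RM-aware validation. All names
// here are illustrative, not the actual Java reference implementation.
public class CAttributeSketch {

    // Minimal stand-in for a cardinality interval such as 0..*
    static class Interval {
        final int lower;
        final Integer upper; // null means unbounded (*)
        Interval(int lower, Integer upper) {
            this.lower = lower;
            this.upper = upper;
        }
    }

    static class CAttribute {
        final String rmAttributeName;
        final Interval cardinality; // null when the archetype gave none
        final List<Object> children = new ArrayList<>();

        CAttribute(String rmAttributeName, Interval cardinality) {
            this.rmAttributeName = rmAttributeName;
            this.cardinality = cardinality;
        }

        // At parse time we can only say whether a cardinality was stated;
        // whether the attribute is actually a container is an RM question.
        boolean hasCardinality() {
            return cardinality != null;
        }
    }

    public static void main(String[] args) {
        CAttribute items = new CAttribute("items", new Interval(0, null));
        CAttribute name = new CAttribute("name", null);
        System.out.println(items.hasCardinality()); // prints true
        System.out.println(name.hasCardinality());  // prints false
    }
}
```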

This does differ from the current design, which requires the parser to
decide whether an attribute is single-valued or multiple-valued, which
in turn requires an extra keyword (e.g. "container") or a mandatory
"cardinality" keyword. In my view, such a keyword isn't really necessary
for parsing and doesn't really add any value at this stage.

Cheers,
Rong




>
> - thomas beale
>
>
> _______________________________________________
> openEHR-technical mailing list
> openEHR-technical at openehr.org
> http://lists.chime.ucl.ac.uk/mailman/listinfo/openehr-technical
>
>

