Hi there,
I need to check out the source code for the details, but at the moment I
am using the combinations of cardinality, occurrence and existence to
figure out the semantics of certain choice operations.
For example, when a clinician wants a field with zero up to N options
selected, expressing these combinations in the AOM is the only way I can
model it. In the UI this corresponds to a field with a label ("Abdominal
exam results") and a couple of checkboxes, where all, none, or any
combination of them may be selected. I am not sure I can clearly see the
logic behind this choice, though.
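To illustrate what I mean, a rough ADL sketch (the node ids and element
names are made up, not from a real archetype):

```adl
CLUSTER[at0001] matches {  -- Abdominal exam results
    items cardinality matches {0..*} matches {
        ELEMENT[at0002] occurrences matches {0..1} matches {*}  -- e.g. "tenderness"
        ELEMENT[at0003] occurrences matches {0..1} matches {*}  -- e.g. "distension"
    }
}
```

Here the cardinality of 0..* together with each ELEMENT's optional
occurrences is what gives the "none, some or all" checkbox semantics.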
If I get it right, the reasoning is: "now that we have reference model
checking, we don't need to express these in the AOM, since expressing a
constraint when it actually does not constrain anything is not nice". Am I
correct?
Now, in general, I'd like to have RM model checking as an option that I
can resort to when I want to. If RM model checking becomes a necessary
part of the way the AOM is used, doesn't this introduce a spillover effect
for those who want to use ADL + AOM with other RMs? At the moment ADL +
AOM is quite disconnected from the RM, a kind of late binding, but if RM
checking replaces some of the things handled by the AOM now, even if we
get some sort of type safety, I feel this may make some things harder for
us in the future. (I hope the type safety, late binding / early binding
analogy is right.)
A solid example: I intend to express RM-class-like constructs with ADL in
order to describe decision support related constructs. With the current
AOM implementation, everything that describes the structure is within the
AOM, but if some of this is shifted to the RM, then in my case I'd also
need a modified RM checker to make use of the AOM.
Now, if I'm getting all of this right, I'd rather have anything related to
RM checking be an optional capability. This is not the only case where we
use a constraining attribute to express no constraint at all. Free text in
a field is defined with a constraint whose allowed values are *, is it
not? I can adapt to changes here, hoping that the Java parser will adapt
to them too, but I'd love to have a couple of real-life cases that demand
these changes in exchange for the hours that will go into coding.
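For reference, the free text case I mean typically looks something like
this in ADL (a sketch, assuming an openEHR DV_TEXT value and a made-up
node id):

```adl
ELEMENT[at0010] occurrences matches {0..1} matches {  -- free text field
    value matches {
        DV_TEXT matches {*}  -- any text allowed: a "constraint" that constrains nothing
    }
}
```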

Best Regards
Seref


On Fri, Jul 3, 2009 at 3:17 PM, Thomas Beale <
thomas.beale at oceaninformatics.com> wrote:

> David Moner wrote:
> > From my point of view, loading an RM meta-model into the
> > ADL/XML/OWL/whatever parser is still an inappropriate dependency. That
> > would mean that any change to the RM, if there is any, also implies
> > changing that meta-model for the parser to work correctly.
>
> Hi David,
>
> A change in the RM won't change the meta-model, which is the 'model of
> the model'. You can see the kind of classes I have used to express the
> meta-model here:
>
> http://www.openehr.org/svn/ref_impl_eiffel/BRANCHES/specialisation/libraries/common_libs/src/basic_meta_model/
>
> If there is a change in the RM, it will of course mean a change in the
> RM schema expressed in terms of the meta-model. But that's exactly what
> we want - to know if current (and new) archetypes are going to be valid
> against the latest RM.
>
> >
> > In my mind, a dual model system implementation should support a
> > minimal working environment totally independent of the RM. That is, if
> > I receive an archetype of an RM that I don't know, I should still be
> > able to parse it and show it in a minimal form (maybe ugly and
> > redundant, but at least functional).
>
> this is difficult for two reasons. Firstly, archetypes only express
> constraints _on a reference model_, and _in addition_ to a reference
> model - they only express the additional constraints that the underlying
> RM does not provide. So in general, archetypes are a partial construct.
> Secondly, specialised archetypes have to be expressed in 'differential'
> format (the same concept as object-oriented software classes), which are
> even more partial - they depend on their specialisation parent, which in
> turn depends on the RM. Essentially, you can parse it (we do that now)
> but without the RM it is not a lot of use.
>
> At the moment, I have changed the reference parser to only use the RM
> during the first pass parsing process to detect which attributes are
> containers and which ones are not. If we decide that we don't want to do
> this, we have to go back to putting some kind of marker on container
> attributes to indicate them as being containers (what we have done
> historically). To my mind this has always been unsatisfactory, since a)
> it forces you to put a cardinality constraint in, even though you are
> not wanting to constrain the cardinality and b) it could actually be
> wrong, and this gives rise to another source of errors. Think about a) -
> how did the authoring tool know whether the attribute in question was a
> container or not? It must have had access to the reference model in some
> form to know this. So we agree that the tool that builds the archetype
> in the first place must know about the RM, but not the parser? It
> doesn't seem consistent to me.
>
> > In fact, I have always thought that the parsing of C_Domain_Type is
> > not yet a well resolved problem, but that's another topic :-)
> >
> > Moreover, if we finally suppose that a meta-model of the RM is
> > available at the time of parsing, obviously it will be also available
> > at the time of working with the AOM. And then, coming back to the
> > occurrences problem, why do we need C_SINGLE_ATTRIBUTE and
> > C_MULTIPLE_ATTRIBUTE classes at the AOM? I mean, why not just have a
> > C_ATTRIBUTE class with its members attribute and an optional
> > cardinality attribute that would be interpreted as single- or
> > multiple-valued through the RM meta-model?
>
> that is a fair question. In fact you are correct - it can be done
> exactly as you say, and that is how the reference parser has done
> it. However... I have to say that I would prefer to convert it to the
> other model, not really to follow the AOM religiously (which we should
> be doing but aren't ;-) but to simplify the software. I might be wrong,
> and maybe it is already simpler as it is....
>
> >
> > Maybe a solution could be to add a new keyword such as "container" or
> > "multiple" to specify that an attribute is multivalued and that has
> > nothing to do with the cardinality. Maybe something like:
> >
> > ITEM_TREE[at0001] matches {
> >                 items container matches { ... }
> > }
> >
> > or if we want to redefine the cardinality:
> >
> > ITEM_TREE[at0001] matches {
> >                 items container cardinality matches {1..*; ordered}
> > matches { ... }
> > }
>
> I am not religiously against this, although personally I would not
> favour it. But if the community preferred to go this way, and the
> arguments seem solid enough, it could obviously be done without too much
> difficulty. I think we need to think about the question I raised above:
> how does the authoring tool reliably know whether an attribute is a
> container or not? If this is via access to the RM, then does it make
> sense to prevent the parser (and don't forget, an authoring tool must
> parse archetypes to read them in in the first place) from seeing the RM?
>
> - thomas
>
>
> _______________________________________________
> openEHR-technical mailing list
> openEHR-technical at openehr.org
> http://lists.chime.ucl.ac.uk/mailman/listinfo/openehr-technical
>
