On 25/04/2013 11:47, Diego Boscá wrote:
> As you know, I'm not a big fan of domain types, so take my comments
> with a grain of salt ;)
> I understand that back in the day when archetypes were hand crafted
> domain types could serve a purpose. But in my opinion ADL should not
> be written by hand nowadays. Tools should be the ones that 'hide' the
> 'verboseness' and provide the user with a simple interface to simulate
> domain types if you want/need that. Also, the difference in file size
> is negligible (if archetypes go from 16 kB to 20 kB I wouldn't worry
> that much...).
> If you ask me I would get rid of them completely and make ADL
> completely model agnostic.
I'm not worried about size, up to a point. But there are some truisms
about formalisms - the main one is that if the context-free grammar of a
formalism only has a complicated way to do something, then any
structural representation will also be complicated. Additionally, if
it's complicated to express a simple thing in the formalism, it's
likely that no user or developer understands it clearly.
Consider these dichotomies:
* OWL (readable) v RDF (hideous)
* JSON (simplistic, but readable) v XML (hard to read, tricky
inheritance model, tricky containment semantics, ...)
* Ruby / Python (readable, according to the young generation at least ;-)
v C++ (much harder than it should be)
* Ecore syntax (human readable and computable) v XMI (no need to say
anything here).
One thing we can learn from this is that where clear abstract syntaxes
are not found, there you find confusion.
>
> This is why I can agree with the second point completely: There you
> are making ADL better, more powerful.
> But I see a problem with the first point, as it still requires an
> external definition of the 'mappings' between how we understand codes
> in each one of the standards (and which information we can constrain
> about them).
Well, that's true, but it's already true for types like Date, Time,
DateTime and Duration. Note that a DateTime with timezone has 7 pieces
of information in it, and a lot of implied validity rules. Is it a leaf
type or a complex type? We just use ISO 8601 strings for all of these,
and let other tools work out the obvious mappings between the various RMs
with TS (HL7), DATE/TIME types (openEHR), XSD gXXX types (FHIR), and so on.
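To make the point concrete, here is a minimal sketch (illustrative only, not any openEHR or HL7 API - the function name is made up) showing that a single ISO 8601 string with timezone really does carry 7 distinct pieces of information:

```python
from datetime import datetime

def datetime_components(iso_string: str) -> dict:
    """Split an ISO 8601 datetime string into its constituent parts.

    Illustrative helper: one 'leaf' string, seven pieces of
    information (plus the implied validity rules among them).
    """
    dt = datetime.fromisoformat(iso_string)
    return {
        "year": dt.year,
        "month": dt.month,
        "day": dt.day,
        "hour": dt.hour,
        "minute": dt.minute,
        "second": dt.second,
        "utc_offset": dt.utcoffset(),  # timezone, the 7th component
    }

parts = datetime_components("2013-04-25T11:47:00+02:00")
print(len(parts))  # 7
```

The mapping tools mentioned above would do the reverse: take the structured components of a TS / DV_DATE_TIME / gDateTime value and serialise them back to the same leaf string.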
Proposing the idea of a 'terminology code' made up of a terminology id
and a code or code-phrase (as a string expressed in e.g. the SNOMED CT
Compositional grammar) as a built-in type doesn't seem a great leap in
the semantic age.
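As a sketch of what such a built-in type would mean operationally (assuming a 'terminology_id::code' notation, following the code-list syntax used later in this mail - the function is hypothetical, not part of any specification):

```python
import re

# Hypothetical sketch: a 'terminology code' carried as a single
# string of the form 'terminology_id::code', split on demand into
# its two parts, rather than modelled as a structured object.
TERM_CODE_RE = re.compile(r"^(?P<terminology>[A-Za-z0-9_.\-]+)::(?P<code>.+)$")

def parse_terminology_code(text: str) -> tuple:
    """Split 'terminology::code' into (terminology_id, code_string)."""
    m = TERM_CODE_RE.match(text)
    if m is None:
        raise ValueError("not a terminology code: %r" % text)
    return (m.group("terminology"), m.group("code"))

print(parse_terminology_code("local::at0022"))        # ('local', 'at0022')
print(parse_terminology_code("snomed_ct::73211009"))  # ('snomed_ct', '73211009')
```

The code part could equally be a longer code phrase, e.g. a SNOMED CT Compositional Grammar expression, since the pattern only requires that something follows the '::'.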
> With this new syntax, can we constrain mappings between
> codes? (How do I say that I don't want to allow the mappings in a
> certain coded text?)
I'm not exactly sure what constraint you want to express here: can you
be more precise?
> and what about the code qualifiers? What if your
> RM defines another kind of attribute for codes that is interesting to put
> into the archetypes but not supported by this code syntax?
> If both visions (codes as a type and codes as a full structure)
> coexist, then we have the same problem as we have now (or worse).
Well, in openEHR we have always modelled code terms as syntax, not as a
complex model of qualifiers. See the CODE_PHRASE type.
>
>
> PS: BTW, by definition a leaf constraint type (the new proposed
> 'C_TERMINOLOGY_CODE' or whatever) does not have a node id, so I don't see
> how one would be able to define alternatives of codes from different
> terminologies, or specialize that...
> PPS: ...which is the exact same problem that domain types have (as
> they also lack a node id)
It depends on what you are trying to do. The 'normal' thing that 90% of
archetypes need to do is this:
ELEMENT[at0021] occurrences matches {0..1}
matches {    -- Certainty
    value matches {
        DV_CODED_TEXT matches {
            defining_code matches {
                [local::
                at0022,    -- Suspected
                at0023,    -- Probable
                at0024]    -- Confirmed
            }
        }
    }
}
If this is written without the 'custom syntax' then you have:
ELEMENT[at0021] occurrences matches {0..1}
matches {    -- Certainty
    value matches {
        DV_CODED_TEXT matches {
            defining_code matches {
                CODE_PHRASE matches {
                    terminology matches {
                        TERMINOLOGY_ID matches {
                            value matches {"local"}
                        }
                    }
                    code_string matches {
                        "at0022",    -- Suspected
                        "at0023",    -- Probable
                        "at0024"     -- Confirmed
                    }
                }
            }
        }
    }
}
Either way, there are no at-codes on the possibilities, and there's no need for them.
Of course, you can always do this:
ELEMENT[at0021] occurrences matches {0..1}
matches {    -- Certainty
    value matches {
        DV_CODED_TEXT [atNNNN] matches {
            defining_code matches {
                ...
            }
        }
        DV_CODED_TEXT [atNNNN] matches {
            defining_code matches {
                ...
            }
        }
    }
}
which might make sense in some situations.
- thomas