[Fwd: [JIRA] Created: (SPEC-302) Translations embedded in the ADL are not efficient and should instead use 'gettext' catalogs.]
Hi All

I think I have said it before, but I think we need to see this managed at the publication level. CKM currently stores translations as distinct assets. This has a number of advantages:

1) Translations can be added, reviewed and accredited asynchronously for the same archetype
2) Translations can be updated independently of revisions
3) Archetypes can be downloaded with the languages required

Cheers, Sam

From: openehr-implementers-boun...@openehr.org [mailto:openehr-implementers-bounces at openehr.org] On Behalf Of Thomas Beale
Sent: Monday, 4 May 2009 9:02 PM
To: For openEHR implementation discussions
Cc: For openEHR technical discussions
Subject: Re: [Fwd: [JIRA] Created: (SPEC-302) Translations embedded in the ADL are not efficient and should instead use 'gettext' catalogs.]

Tim Cook wrote:
> On Thu, 2009-04-30 at 22:03 +1000, Thomas Beale wrote:
>> It is clearly true that with a number of translations the archetype will grow bigger, and initially (some years ago) I thought separate files might be better as well. But I really wonder if it makes any difference in the end - since, in generating the 'operational' (aka 'flat') form of an archetype that is for end use, the languages required (which might still be more than one) can be retained, and the others filtered out. I don't think gettext would deal with this properly - the idea that an artefact can have more than one language active.
>
> I can only refer you to the bazillions of applications that use gettext. Browsers and GUI widgets everywhere are designed expecting gettext catalogs. Not using gettext means that every implementation has to develop its own filtering mechanisms, in place of reusing proven existing technology. OR, you could choose to develop an openEHR filtering specification, then develop browser interfaces and widget interfaces to match.

But my question was: if we want an archetype to retain two languages, e.g. English and Spanish, out of the (say) dozen available translations, can gettext be made to do that?

>> The other good thing about the current format (which will eventually migrate to pure dADL+cADL) is that it is a direct object serialisation, and can be deserialised straight into in-memory objects (hash tables in the case of the translations).
>
> Hmm, sorry, I don't get the point here. Seems to me you are saying that you pull all translations into memory, instead of letting the application decide which one it wants.

Well, that is the default; but depending on what 'application' we are talking about, this is quite likely what is wanted - e.g. if it is an archetype design tool that also manages translations. But I take your point - we probably should make it so that dADL can ignore some parts of an input file.

>> Anyway, I think that we need to carefully look at the requirements on this one, before leaping to a solution...
>
> Of course. That is why I suggested targeting the 2.0 version. There is a good chance that there will be knock-on effects (good or bad) to the RM (AuthoredResource, et al.) as well. I'd like to go back to a very basic question I have. What is the use of having the original language as (a specific) part of the archetype if it isn't meant to be the validation language? Seems to me that it is the expression of the original author for the construction of the archetype. Translations are a convenience for everyone else.

Not sure I understand the question, Tim - do you mean: is the original language used in validation?
There are very few things that are linguistically dependent in the validation operation - only where regular expression constraints are used; I can't think of any others off hand. The linguistic elements of the ontology section get used on the UI of course, and in documents, but that is for humans, not computing.

- thomas
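To make the question about retaining two languages concrete, here is a minimal sketch (not part of any openEHR tooling), assuming the archetype's term texts had been exported to ordinary per-language gettext catalogs; the catalog name 'archetype_terms' and the 'locale' directory are hypothetical. Python's gettext can chain several languages, so that for example Spanish is preferred and English is kept as a fallback:

    import gettext

    # Sketch only: assumes compiled catalogs exist at
    # locale/<lang>/LC_MESSAGES/archetype_terms.mo (hypothetical paths).
    catalog = gettext.translation(
        "archetype_terms",        # hypothetical catalog (domain) name
        localedir="locale",       # hypothetical directory of compiled .mo files
        languages=["es", "en"],   # preferred order: Spanish first, then English
        fallback=True,            # fall back to the untranslated msgid if needed
    )
    _ = catalog.gettext
    print(_("Blood pressure"))    # Spanish text if present, otherwise English

So a fallback chain can keep more than one language 'active' at lookup time, but the selection is made per call inside the application, rather than by producing an operational archetype that carries exactly the requested languages, which is the behaviour Thomas describes for the flat form.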
XSD incompatibilities with XSLT code
Hello Hugh and all other readers,

I still have some doubts about the right application of openEHR schema definitions. Please see my comments inline!

Hugh Leslie wrote:
> The TDS schema is not generic like the openEHR schema - it relates exactly to a particular use case (template), including the correct names for elements, etc. It does contain most of the constraints as well (though not all). The real value for developers is that once they have a TDS that matches some application schema, they can work in an XML environment and don't have to understand anything about openEHR to create instances of data that conform to a schema that contains things that have direct meaning from their system schema (or message). We are talking here about systems that are not natively openEHR compliant.

O.K. - if one is using a certain TDS in order to export data from a legacy system, this would result in some kind of intermediary format. This would lower the effort needed to include legacy data and break the transformation to real openEHR data into two steps: LEGACY -> TDS-conformant -> openEHR-conformant. The second step could be performed by a generic service that consumes the TDS-conformant data and the TDS and then does the transformation into the real openEHR format. It makes complete sense to me to use a template to capture the structure and semantics of legacy data with the aim of transforming it to the openEHR world.

> Of course it is possible to create openEHR instances directly, just as it is possible to create HL7 v3 or CDA instances directly, and software systems will do this as well. The issue with all these things is the high degree of knowledge and expertise required to get it right. The TDS approach makes it much easier to produce instances because a lot of the abstraction is made more concrete for a particular instance, and it's automatically generated from the models. There is a huge amount of XML expertise and tooling out there compared with openEHR or HL7 v3.

Again, do you know of tools (we use Altova at the moment; the Eclipse tools seemed to be less advanced) that ease the handling of XML generation, mapping, transformation, validation, formatting, ...?

> We have been hearing that some people are saying that openEHR archetypes are just for clinicians and not for secondary use. Well, this approach allows clinician-approachable models to be used for any secondary use that you would like - we are generating these schemas and hence messages, documentation, data instances, queries and also software classes, directly from the models. As far as validation goes, this can occur partly at the TDS level as I mentioned, as most of the constraints are there, but most importantly, as data is committed to a repository it should be validated against the original archetypes that make up the template, as these are the final source of truth about the constraints on the data instance.

How can this data validation against the original archetypes be done? A simple XSD schema validation does not suffice anyway - am I right? It seems to me that without specific knowledge and deep familiarity with openEHR this cannot be done. This takes me back to investigations I made concerning the generation of a user interface. A common and desirably open-source set of widgets (GUI elements for data entry) would be very helpful here. Maybe these could do on-the-fly validation at the point of data entry - which surely would be the most user-friendly way of checking for archetype definition conformance.

Hope to get comments from more people out there!
brgds Demski
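As a rough illustration of the first, structural level of validation discussed above - checking a TDS-conformant instance against the schema generated from the template - here is a minimal sketch using lxml; the file names 'blood_pressure_tds.xsd' and 'bp_instance.xml' are hypothetical, and this says nothing about the deeper check against the original archetypes, which still needs openEHR-aware tooling:

    from lxml import etree

    # Sketch only: hypothetical file names for a template-generated schema
    # and a data instance exported from a legacy system.
    schema = etree.XMLSchema(etree.parse("blood_pressure_tds.xsd"))
    instance = etree.parse("bp_instance.xml")

    if schema.validate(instance):
        print("Instance conforms to the TDS")
    else:
        for error in schema.error_log:
            print("line %d: %s" % (error.line, error.message))

Whether the data also satisfies the constraints that did not survive into the XSD, and the archetypes' terminology bindings, would have to be checked by a separate, openEHR-aware service at commit time, as Hugh describes.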