We need to be clear on what we mean by 'RM' here. Colloquially, 'RM' in openEHR has meant 'the information model' (of the EHR and demographics). Now, we create models for everything we want to compute with; otherwise we get instances with no model, which is not useful.

Hence, we also have the 'AM' - Archetype Model - which consists of ADL, the AOM (the actual model of archetypes), and a few other bits and pieces. We have also long had the notion of an 'SM', or Service Model - a formal model of services. We have only just started to define that last part.

In more recent times it became clear that we needed a set of common models of elements that are re-used across the other models - basic things like primitive data types, identifiers, the notion of a 'resource' (a thing that has a lot of authoring and language meta-data); also generic languages like ODIN, BMM, and so on. So we defined another component called 'BASE' to contain those things.

We have also created components for:

 * Querying ('QUERY') - to contain models / languages of querying (so
   far, AQL)
 * Process ('PROC') - to contain models of clinical process - so far,
   the Task Planning model
 * Clinical decision support ('CDS') - to contain models of CDS elements

These components are illustrated on the specs governance page <https://www.openehr.org/programs/specification/governance>.

So, the kind of proposal we are discussing in this thread is whether it could make sense to formalise, in an open, standard way, how data elements and groups are mapped to presentation data sets and groups, where the latter are understandable as e.g. logical visual components, document components, message components, etc.

To do this, there could be a new openEHR model component that would contain a few classes, or maybe some language definition, or some other formal specification artefact, that would enable the data <=> presentation elements relationship to be expressed for any particular case, e.g. a screen form. It might be called 'PRES', for obvious reasons.

If it were done as a model, we might imagine some classes like DATA_SET, DATA_GROUP, and so on, as proposed in my first rough post. It could mean more classes and/or attributes; e.g. it might make sense to have an attribute DATA_GROUP.can_repeat, or a TEXT_FIELD class, or whatever. Possibly elements that represent 'selectors', like menu items, tabs, etc., need classes as well; or maybe they can be specified by generic meta-data, e.g.

"presentation_characteristics": [
    "visible": True;
    "can_repeat": True;
    "etc": "etc";
]

and so on. If it is done like this, then there is another model somewhere else of the above structure - most likely outside of openEHR. On the other hand, it may be that all the externally defined models are concrete and product-specific, i.e. things like React, Angular, Node and so on, and underlying widget primitives. In that case maybe we want a more generic model of visualisation elements such as would be found in XUL, i.e. things like the following (from the XUL wikipedia page <https://en.wikipedia.org/wiki/XUL>):

Top-level elements
   window, page, dialog, wizard, etc.
Widgets
   label, button, text box, list box, combo box, radio button, check box,
   tree, menu, toolbar, group box, tab box, colorpicker, spacer,
   splitter, etc.
Box model
   box, grid, stack, deck, etc.
Events and scripts
   script, command, key, broadcaster, observer, etc.
Data source
   template, rule, etc.

It is clear that if there were a model containing very basic classes like WINDOW, LABEL, TEXT_BOX, and so on, then archetypes of such classes would in fact just be definitions of forms and parts of forms. (A similar argument applies if you are trying to build documents or messages.)
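As a purely hypothetical sketch - none of these classes exist in any openEHR specification, and all names and paths below are invented for illustration - such a minimal presentation model might look like this in TypeScript:

```typescript
// Generic presentation characteristics, echoing the meta-data idea above.
interface PresentationCharacteristics {
  visible: boolean;
  can_repeat: boolean;
}

// Common ancestor of all presentation elements.
abstract class PRESENTATION_ELEMENT {
  constructor(
    public id: string,
    public characteristics: PresentationCharacteristics = { visible: true, can_repeat: false }
  ) {}
}

// Containers hold children, so a WINDOW of LABELs and TEXT_BOXes
// is just an instance tree of this model, i.e. a form definition.
class CONTAINER_ELEMENT extends PRESENTATION_ELEMENT {
  children: PRESENTATION_ELEMENT[] = [];
  add(child: PRESENTATION_ELEMENT): this {
    this.children.push(child);
    return this;
  }
}

class WINDOW extends CONTAINER_ELEMENT {}

class LABEL extends PRESENTATION_ELEMENT {
  constructor(id: string, public text: string) { super(id); }
}

class TEXT_BOX extends PRESENTATION_ELEMENT {
  // data_path would point at an openEHR data element; this path is made up.
  constructor(id: string, public data_path: string) { super(id); }
}

// A trivial 'form' is then just an instance of the model:
const bpForm = new WINDOW("bp_form")
  .add(new LABEL("lbl_sys", "Systolic"))
  .add(new TEXT_BOX("txt_sys", "/data/events/data/items[at0004]/value"));
```

The point of the sketch is only that a form becomes ordinary instance data of a small class model, which is exactly what archetypes need in order to constrain it.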

If we define such a model so that the contents of the various visual elements can be openEHR data elements as well, i.e. OBSERVATION, etc., and parts of such elements, then we have a way of using archetypes and templates to define a whole visual interface, and the existing archetype tools and libraries can be used to do a number of interesting things.

Such a template would be processed into a concrete form, targeted at a toolkit of choice, e.g. Angular or whatever, including non-web app technologies such as native Android, old-school heavy clients and so on.
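A minimal self-contained sketch of what such processing could look like, with plain HTML standing in for a real target toolkit (element kinds and ids are invented):

```typescript
// A generic, toolkit-neutral description of a presentation element.
interface GenericElement {
  kind: "window" | "label" | "text_box";
  id: string;
  text?: string;
  children?: GenericElement[];
}

// One renderer per target technology; this one emits plain HTML.
// An Angular or Android renderer would walk the same tree differently.
function renderHtml(el: GenericElement): string {
  switch (el.kind) {
    case "window":
      return `<div id="${el.id}">${(el.children ?? []).map(renderHtml).join("")}</div>`;
    case "label":
      return `<label id="${el.id}">${el.text ?? ""}</label>`;
    case "text_box":
      return `<input id="${el.id}" type="text"/>`;
  }
}
```

The key design point is that the generic tree is the single source of truth; targeting a new toolkit means writing a new renderer, not redefining the form.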

At the moment, as far as I am aware, clinical users / informatics specialists trying to define such applications have to follow a path like the one Tony Shannon described, or use something like the Marand tools. These tools are all very nice, but where they allow easy design of the application, they are not openly standardised; and/or they force the designer into something intermediate, e.g. PulseTile, as Tony mentioned.

PulseTile looks pretty close to the high-level library of visual elements that I am thinking of. So we could imagine creating a model in openEHR that simply mimics the PulseTile elements, defined as classes with the minimum attributes required to configure them - this would be almost trivial to do. Then we can build archetypes and templates of that, including the data elements, AQL queries and other semantic items where we want them. Generic tools like the ADL Workbench and LinkEHR could do this right now (both would be pretty ugly experiences, but as a bootstrap step it would have some value).

Without this, what we have to do is start with some tool, say a PulseTile studio (does it exist?), that allows you to create the presentation forms you want; then somehow you have to add in the pointers to the various data elements, usually by populating fields in a generated XML or JSON file. The latter is not that hard to do - the Marand tools, and most likely some others, allow it to be done by drag and drop, with a template on the right-hand side of the screen as a data source.
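The kind of generated binding file described above might reduce to something like the following structure - a sketch only, with field ids and archetype paths invented for illustration:

```typescript
// One entry per widget: which data element it is bound to.
interface FieldBinding {
  field_id: string;        // id of the widget in the generated form
  archetype_path: string;  // openEHR path of the bound data element (invented here)
}

const bindings: FieldBinding[] = [
  { field_id: "txt_systolic",  archetype_path: "/data/events[at0006]/data/items[at0004]/value" },
  { field_id: "txt_diastolic", archetype_path: "/data/events[at0006]/data/items[at0005]/value" },
];

// The lookup a form runtime might perform when populating a field.
function pathFor(fieldId: string): string | undefined {
  return bindings.find(b => b.field_id === fieldId)?.archetype_path;
}
```

Drag-and-drop tooling essentially just authors this mapping; the problem is that today each tool invents its own file format for it.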

Solutions like this are pretty close to where we want to be, but they don't achieve (as far as I can see):

 * re-usability of presentation templates across different visual /
   presentation technologies
 * full model-based computability of presentation definitions - because
   there is no combined formal model of presentation and semantic data
   elements
 * integration of AQL, openEHR data elements from different parts of
   the RM (EHR, demographic), and even different components - why not
   Task Planning elements from the PROC component?
 * inclusion of non-openEHR data items that come from external sources
 * a way of defining screen work-flow, as it relates to semantics of
   data, e.g. one choice in a drop-down causes this form to pop up,
   another causes that one to pop up.
 * a single IDE tool experience for clinical application designers that
   enables building of apps from:
     o a standard library of logical primitives (something like the
       PulseTile elements)
     o a standard library of semantic components - essentially,
       anything that either 'is' data (RM, PROC, CDS elements) or
       generates data, i.e. AQL queries
     o a standard way of specifying opaque data sources, e.g. the
       result of doing a drug interaction check
 * solving the problem that COMPOSITION templates do not represent
   display / retrieval forms, only data capture forms.
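The screen work-flow gap in particular - one choice in a drop-down causing a particular form to pop up - could in principle be captured as declarative rules over the same model. A hypothetical sketch, with all field, value and form names invented:

```typescript
// A workflow rule: a selected value in one field determines the next form.
interface WorkflowRule {
  field_id: string;   // the selector widget, e.g. a drop-down
  value: string;      // the chosen value
  show_form: string;  // id of the form to pop up
}

const rules: WorkflowRule[] = [
  { field_id: "dd_specialty", value: "cardiology",  show_form: "form_cardio_assessment" },
  { field_id: "dd_specialty", value: "respiratory", show_form: "form_resp_assessment" },
];

// Resolve which form (if any) a given selection should trigger.
function nextForm(fieldId: string, value: string): string | undefined {
  return rules.find(r => r.field_id === fieldId && r.value === value)?.show_form;
}
```

Because the rules refer to model elements rather than to toolkit widgets, they would survive retargeting from one presentation technology to another.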

I hope we can use the knowledge of UI/UX experts here to create a way of filling these gaps. I'd like a future tool that felt like building an application mockup in something like Balsamiq, but that built the real thing instead.

As usual, feel free to take the idea apart, the above is just to clarify my thoughts so at least others know what to critique.

- thomas



On 19/02/2018 06:29, Bert Verhees wrote:
On 19-02-18 10:21, Diego Boscá wrote:
Personally I would prefer if no visualization attributes ended up in the EHR RM, but I'm fine if we want to create an additional RM to handle visualization (and we create templates of that model that point to the EHR model)

I agree on this!

(again sloppy English in my previous message, this time changing the meaning, I changed the quote below, I am awake now, will not happen again, excuse me)


2018-02-19 10:15 GMT+01:00 Bert Verhees <[email protected] <mailto:[email protected]>>:

    On 18-02-18 23:09, GF wrote:

        Is it an idea to annotate nodes with instructions for display.


    Personally I think having special templates/archetypes for
    display is better. Templates are created per purpose, and
    mixing purposes in a single template does not seem a good idea
    to me




_______________________________________________
openEHR-technical mailing list
[email protected]
http://lists.openehr.org/mailman/listinfo/openehr-technical_lists.openehr.org

--
Thomas Beale
Principal, Ars Semantica <http://www.arssemantica.com>
Consultant, ABD Team, Intermountain Healthcare <https://intermountainhealthcare.org/> Management Board, Specifications Program Lead, openEHR Foundation <http://www.openehr.org> Chartered IT Professional Fellow, BCS, British Computer Society <http://www.bcs.org/category/6044> Health IT blog <http://wolandscat.net/> | Culture blog <http://wolandsothercat.net/>
