On 28-06-18 10:33, Thomas Beale wrote:

On 27/06/2018 13:00, Bert Verhees wrote:
Dear Seref, I do not agree with this without having explored all the possibilities. I think it is important not to jump to conclusions and to keep the discussion open. I have some ideas about how to keep it interoperable, and I would like to discuss them with an open mindset.

Talking about interoperability.

By the way, how do you create FHIR messages from openEHR compositions? Or how do you create openEHR compositions from FHIR messages? You have to create a template manually, fitting this item to that data point, don't you?

that is correct, because FHIR imposes its own model. This is the basic reason why one should not really use message standards to interoperate between systems whose data are already transparently structured. However, some organisations want to pay for these pointless conversions, so people do them.
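To make concrete what such a manual template boils down to, here is a minimal Python sketch. The archetype path, the at0004 node id and the flattened-composition layout are illustrative assumptions, not a real openEHR API:

# One mapping entry per data point: openEHR archetype path -> FHIR coding.
BODY_TEMP_MAP = {
    "openehr_path": "/content[openEHR-EHR-OBSERVATION.body_temperature.v2]"
                    "/data/events/data/items[at0004]/value",
    "fhir_code": {"system": "http://loinc.org", "code": "8310-5",
                  "display": "Body temperature"},
}

def composition_to_fhir_observation(composition: dict, mapping: dict) -> dict:
    """Pull one data point out of a flattened composition and wrap it
    as a FHIR Observation resource."""
    quantity = composition[mapping["openehr_path"]]
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [mapping["fhir_code"]]},
        "valueQuantity": {"value": quantity["magnitude"],
                          "unit": quantity["units"],
                          "system": "http://unitsofmeasure.org",
                          "code": quantity["units"]},
    }

# Example: a flattened composition holding a single temperature reading.
flat = {BODY_TEMP_MAP["openehr_path"]: {"magnitude": 37.2, "units": "Cel"}}
print(composition_to_fhir_observation(flat, BODY_TEMP_MAP))

Every data point needs such an entry, which is exactly the manual fitting work described above.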

Stability and Mapping:
I think FHIR is good because it is a stable model: a mapping to/from FHIR can be used for a long time, and FHIR is widely used, so mappings can be reused in more situations. There are also disadvantages, such as the HTTP/REST protocol it has incorporated. Google is now planning a gRPC protocol for FHIR, and that is promising, because every datatype can have its own gRPC field predefined, and performance can improve dramatically, maybe even by a factor of 100. As a rule of thumb one could say: never use REST/JSON/HTTP 1.1 for stable models; it throws away a lot of performance.
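To illustrate the last point: with a stable model, every field's type and position can be fixed in advance, so the wire format and its parser become trivial, which is what protobuf/gRPC-style protocols exploit. A rough standard-library sketch (the byte counts are only indicative):

import json
import struct

# A blood pressure reading as it would travel over REST/JSON...
reading = {"systolic": 120.0, "diastolic": 80.0, "pulse": 72.0}
as_json = json.dumps(reading).encode("utf-8")

# ...versus a fixed, predefined binary layout (three little-endian
# doubles), the kind of encoding a binary protocol can use when every
# field of a stable model is known in advance.
as_binary = struct.pack("<3d", reading["systolic"],
                        reading["diastolic"], reading["pulse"])

print(len(as_json), len(as_binary))  # 53 vs 24 bytes, and no text parsing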

Transparency:
Data must not only be transparent in the sense that people can understand them; they must also be transparent in the sense that the software internals of the sender and receiver can handle them. For that purpose they need to be mapped from and to those internal structures. If a GP receives a FHIR message and maps it to his own EHR tables, the data from that message become available in the doctor's normal working screens. That is the transparency that is needed.
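As a deliberately over-simplified sketch of that last mapping step (the table layout here is hypothetical; real GP schemas differ per vendor):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE observations
              (patient_id TEXT, loinc TEXT, value REAL, unit TEXT)""")

def store_fhir_observation(obs: dict, patient_id: str) -> None:
    """Map the handful of fields the GP screens actually need from an
    incoming FHIR Observation into the local table."""
    coding = obs["code"]["coding"][0]
    qty = obs["valueQuantity"]
    db.execute("INSERT INTO observations VALUES (?, ?, ?, ?)",
               (patient_id, coding["code"], qty["value"], qty["unit"]))

incoming = {
    "resourceType": "Observation",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8310-5"}]},
    "valueQuantity": {"value": 37.2, "unit": "Cel"},
}
store_fhir_observation(incoming, patient_id="12345")
print(db.execute("SELECT * FROM observations").fetchall())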



Even between two parties both using openEHR, you are only automagically interoperable when the two parties use exactly the same archetypes; otherwise you have to puzzle the data items together.

they only need to use the same data points of those archetypes, or else any specialised derivative. This isn't hard to achieve; pretty clearly all systems using openEHR today use the same vital signs archetypes or derivatives to record vital signs. There is no point doing otherwise.
I don't know whether that is true, but if you say so, I accept that statement, also because it is restricted to vital signs.
https://en.wikipedia.org/wiki/Vital_signs



You have to do the same things when you need to handle a generated archetype. But it will not be that hard; don't expect much complexity from these generated archetypes.

I've missed some of the earlier discussion, but unless you are dealing with genuinely novel measurements or orders, you won't have any 'generated archetypes' for most Observations or Instructions or Actions. You might have some for novel questionnaires or other kinds of assessment tools (new kind of score etc). But for the vast majority of cases, I would think the real need is for runtime /matching/ of data points from /existing/ archetypes to create on-the-fly templates, something we've known about for 15 years.
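A rough sketch of what such runtime matching could look like, assuming incoming data points carry a terminology binding; the index entries and paths below are illustrative, not real CKM content:

# Index of data points from existing, curated archetypes.
ARCHETYPE_DATA_POINT_INDEX = {
    # terminology code -> (archetype id, path within the archetype)
    "LOINC::8310-5": ("openEHR-EHR-OBSERVATION.body_temperature.v2",
                      "/data/events/data/items[at0004]"),
    "LOINC::8867-4": ("openEHR-EHR-OBSERVATION.pulse.v1",
                      "/data/events/data/items[at0004]"),
}

def build_on_the_fly_template(incoming_codes: list) -> tuple:
    """Match incoming codes against known archetype data points; anything
    unmatched is a candidate for a new (generated) archetype."""
    template, unmatched = [], []
    for code in incoming_codes:
        if code in ARCHETYPE_DATA_POINT_INDEX:
            template.append(ARCHETYPE_DATA_POINT_INDEX[code])
        else:
            unmatched.append(code)
    return template, unmatched

template, unmatched = build_on_the_fly_template(
    ["LOINC::8310-5", "LOINC::8867-4", "ACME::novel-score"])
print(template)
print("candidates for new archetypes:", unmatched)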

I agree, there is not an endless number of data-point types. They could also be predefined. We would need sports coaches, athletes and so on to help us with that.

I have called them micro-archetypes before: archetypes containing only one data point, or a few closely related data points.

Let's assume some of these are created, for the reasons mentioned above; pretty soon you are going to want to curate them properly and add them to the library. Over time, the number of 'generated archetypes' will fall to nearly zero, and it will be the matching process that is the main challenge when encountering data not planned for.

With machine learning algorithms, it should not be hard to interpret them.

Don't get me wrong, I like openEHR because of the archetype system and the flexibility it offers. It is no accident that I discuss this here and not in an HL7 group, although that would bring in more money.

But if flexibility is slowed down by years of worldwide review, discussion and consensus-building for a set of archetypes, then there is not much flexibility left.

it is slowed down, that's true, and it could be faster. But I don't see how that reduces flexibility.

The inflexibility lies in handing the proceedings over, losing control, and having to deal with changes that were not asked for or wanted. The data points would need to be as simple as possible: mostly an ELEMENT structure instead of a CLUSTER, or only very simple CLUSTERs when one data point is not sufficient. No deep structures, I would advise.
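As a toy illustration of the shape I mean, loosely modelled on the ELEMENT/CLUSTER distinction (these are not the real openEHR reference model classes):

from dataclasses import dataclass, field

@dataclass
class Element:            # one named data point
    name: str
    value: float
    units: str

@dataclass
class Cluster:            # a shallow group of closely related data points
    name: str
    items: list = field(default_factory=list)

# A micro-archetype with a single data point: just an ELEMENT.
temperature = Element("temperature", 37.2, "Cel")

# When one data point is not enough: a single flat CLUSTER, no nesting.
blood_pressure = Cluster("blood_pressure", [
    Element("systolic", 120.0, "mm[Hg]"),
    Element("diastolic", 80.0, "mm[Hg]"),
])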


This can work very well for the archetypes that are in CKM, but not for all those new devices, new datatypes and new protocols, which cannot wait for these review procedures, because the market will have jumped far ahead by then.

I agree there is a need to be able to create archetypes much more quickly based on device specifications. We need to work on that.

Yes, I agree

Bert
_______________________________________________
openEHR-clinical mailing list
openEHR-clinical@lists.openehr.org
http://lists.openehr.org/mailman/listinfo/openehr-clinical_lists.openehr.org
