Hi Bert,

On Sat, Feb 17, 2018 at 4:57 AM, A Verhees <[email protected]> wrote:

> Hi, sorry to jump in, but if I understand correctly,
>
> I think a possible problem could be that "respiratory infection caused by a
> virus" can cause a number of derived codes to be returned, although in this
> case there are not that many.


I think that is the expected behavior. What is returned also depends on the
operators in the SNOMED expression; for instance, << returns the concept
itself plus all of its descendants. The expression can be tuned to return
just what you want, and the processing of the results should take that into
account.
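For illustration, a minimal sketch of that expansion step, assuming a
hypothetical terminology service; the URL, the "ecl" parameter and the
response shape are invented for the example and are not part of any existing
API.

# Expand a SNOMED CT expression (ECL) into a flat code list via a
# hypothetical terminology service, then compare two tunings of the operator.
import requests

TERMINOLOGY_URL = "https://terminology.example.org/expand"  # hypothetical endpoint

def expand(ecl_expression: str) -> list[str]:
    """Ask the (assumed) service to expand an ECL expression into plain codes."""
    response = requests.get(TERMINOLOGY_URL, params={"ecl": ecl_expression})
    response.raise_for_status()
    return response.json()["codes"]  # assumed response shape

# "<<" returns the concept itself plus all descendants; "<" returns descendants only.
broad = expand("<< 275498002 : 246075003 = 49872002")
narrow = expand("< 275498002 : 246075003 = 49872002")
print(len(broad), len(narrow))  # tuning the operator tunes the size of the result set

The smaller the expansion, the less work is left for whatever processes the
query results afterwards.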


>
> However, to use this mechanism in general, it can happen that a very large
> number of derived codes is returned by the SNOMED engine, and in that case
> the AQL query would need to be executed many times: once for each possible
> derived code.
>

> One could also consider handing the AQL result set over to the SNOMED
> engine to check which codes are derived, which could require fewer
> executions.
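To make the two strategies concrete, a rough sketch; the callables passed in
stand for the (hypothetical) data-repo query and SNOMED subsumption checks
and do not correspond to any real API.

# Strategy A: expand the SNOMED expression first, then execute the AQL/SQL
# query once per derived code (potentially many executions).
def query_per_code(derived_codes, run_query):
    results = []
    for code in derived_codes:
        results.extend(run_query(code))
    return results

# Strategy B: execute the query once, then hand the returned codes to the
# SNOMED engine and keep only the rows whose code is subsumed by the expression.
def filter_after_query(rows, is_subsumed):
    return [row for row in rows if is_subsumed(row["code"])]

Which strategy is cheaper depends on how many derived codes the expression
yields versus how many rows the data repo returns, which is exactly the
estimation problem described next.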
>
> But in both cases this is data mining, and it is always difficult to
> estimate which strategy is best in a specific case.
>
> Maybe a good idea would be to design an intelligent query-strategy-decision
> engine that offers advice on what works best. This engine could execute
> limited queries, for example with a count operator, so that it does not
> need to go all the way once a limit is reached.
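A rough sketch of that idea, with the probe callables as hypothetical
placeholders: estimate the cost of each strategy with cheap, capped count
queries and only then pick one.

LIMIT = 10_000  # stop counting once this many hits have been found

def choose_strategy(count_derived_codes, count_matching_rows):
    """Probe both sides with capped counts, then pick the cheaper strategy."""
    n_codes = count_derived_codes(limit=LIMIT)  # size of the SNOMED expansion
    n_rows = count_matching_rows(limit=LIMIT)   # size of the AQL result set
    return "expand-then-query" if n_codes <= n_rows else "query-then-filter"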
>
> What you write is true: data-mining queries are seldom expected to return
> in real time. But I have seen situations in marketing where they ran for
> hours and queried almost one million dossiers; we even created intermediate
> databases.
>
> That decision engine could also be an external service.
>
> It is good to hear that you are thinking about separate services anyway.
> That works to the advantage of a microservices architecture.
>
> Bert
>
> On Sat, 17 Feb 2018 at 04:30, Pablo Pazos <[email protected]> wrote:
>
>> Hi Pieter,
>>
>> On Fri, Feb 16, 2018 at 12:27 PM, Pieter Bos <[email protected]>
>> wrote:
>>
>>> Sounds like a good proposal Pablo.
>>>
>>>
>> The idea came from a PoC I did last year integrating SNOMED expressions
>> into openEHR queries.
>>
>> IMO complex queries will need very specialized external microservices to
>> be evaluated, and ADL syntax is not enough to represent some complex
>> queries.
>>
>> For instance, "give me all patients that had a diagnosis of respiratory
>> infection caused by virus, from any composition whose archetype is
>> specialized from X".
>>
>>
>> 1. "diagnosis" = archID (Problem/Diagnosis) + path_to_coded_diagnosis
>> 2. "respiratory infection caused by virus" = SNOMED Expression (<<
>> 275498002 |infección del tracto respiratorio (trastorno)| : 246075003
>> |agente causal| = 49872002 |virus|)
>> 3. "composition defined by a specialization of archetype X" = list of
>> arch IDs in a specialization hierarchy
>>
>>
>> 1. Is part of openEHR and the internal data repo model
>> 2. Should be evaluated against a service that will return codes in a
>> plain list
>> 3. Should be evaluated against a service that will return archetype ids
>> in a plain list
>>
>> With 2 and 3 resolved, it is easy to transform everything into SQL or
>> whatever, execute the query, get the results, and transform them into some
>> representation (JSON, XML, CSV, etc.).
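To make that concrete, a minimal sketch of the final step, assuming the two
external services already returned their plain lists; every table, column,
path and archetype name below is invented for illustration.

# Plain lists as returned by the external services (placeholder values).
snomed_codes = ["000000001", "000000002"]                 # item 2: expanded SNOMED codes
archetype_ids = ["openEHR-EHR-COMPOSITION.x.v1",
                 "openEHR-EHR-COMPOSITION.x-special.v1"]  # item 3: specializations of X

code_marks = ", ".join(["?"] * len(snomed_codes))
arch_marks = ", ".join(["?"] * len(archetype_ids))

sql = f"""
SELECT DISTINCT e.patient_id
FROM data_values d
JOIN entries e      ON e.id = d.entry_id
JOIN compositions c ON c.id = e.composition_id
WHERE d.archetype_id = ?                 -- item 1: Problem/Diagnosis archetype
  AND d.path = ?                         -- item 1: path to the coded diagnosis
  AND d.code IN ({code_marks})           -- item 2
  AND c.archetype_id IN ({arch_marks})   -- item 3
"""
params = ["openEHR-EHR-EVALUATION.problem_diagnosis.v1",
          "/data[at0001]/items[at0002]/value",
          *snomed_codes, *archetype_ids]
# sql + params can then be executed and the rows serialized to JSON, XML, CSV, etc.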
>>
>> We could still add complexity to the query, and that would require more
>> external microservices to resolve it, instead of implementing everything
>> internally and ending up with unmaintainable, unscalable, less
>> interoperable systems.
>>
>> Another idea is to think in terms of query "features", so we can define
>> profiles: for instance, my queries support SNOMED expressions, your
>> queries support querying by archetype hierarchies, etc. If we standardize
>> the features, it will be easy to have compliance statements for each
>> system, marking which features are implemented in an openEHR CDR. And
>> having external microservices that help implement those features would
>> make it easier to offer full-featured query interfaces on many different
>> systems, and even to reach query interoperability (we are far from that
>> right now).
>>
>>
>>
>>> For ADL 2, a single archetype API can be used for both archetypes and
>>> templates. However, it makes sense to allow the archetype GET API to
>>> specify the form you want the result in: differential, flattened, or
>>> operational template (OPT2).
>>>
>>
>> IMO most endpoints of the API should be agnostic of the ADL version and
>> work with template IDs, archetype IDs and paths. But of course, if we have
>> template management endpoints, those will be ADL/OPT version dependent,
>> since the formats and models differ.
>>
>>
>>>
>>> Our EHR will still integrate the archetype part and the query part, as
>>> well as the option to choose the archetype used for a slot at runtime. It
>>> could all be built with separate services and APIs, but once you have
>>> everything integrated it makes for a very easy-to-use API for both EHR
>>> and data warehouse usage, without needing sophisticated client libraries.
>>> However, you need much more complex server-side tools in the EHR, of
>>> course.
>>>
>>
>> Of course, that is a design/implementation decision. I agree that having
>> all the "features" implemented in one place adds a lot of complexity to
>> the system, while it might improve performance.
>>
>> A little thought about performance: I think most queries don't need to be
>> executed in real time, and a lot of background work and caching can be
>> done to make things fast, so I don't see an issue with having complex
>> queries and using multiple external microservices to evaluate the whole
>> query.
>>
>>
>> Best,
>> Pablo.
>>
>>
>>>
>>> Regards,
>>>
>>> Pieter
>>>
>>> On 16 Feb 2018, at 15:48, Pablo Pazos <[email protected]> wrote:
>>>
>>> Hi Pieter,
>>>
>>> Besides the API, I think that for ADL2, archetypes and templates/OPTs
>>> have the same model, and archetype IDs / template IDs will follow the
>>> same structure. So for ADL2, using archetypes or templates would be the
>>> same in the API.
>>>
>>> Which endpoints do you find problematic in terms of using ADL2?
>>>
>>>
>>> About querying: analyzing your use case, I think there are two ways of
>>> knowing the full specialization hierarchy. One is to query an archetype
>>> repo/CKM while evaluating a query, without having that info in the data
>>> repo: something like "give me all archetype IDs that specialize arch ID
>>> X", which will return [A, B, C], and then use that list in the query
>>> against your data repo, like "archetype_id IN [A, B, C]".
>>>
>>> The other option is to have the archetype repo/CKM integrated with the
>>> clinical data repo (which I don't like, architecturally speaking), so
>>> that "give me all the archetype IDs that specialize arch ID X" is
>>> resolved internally.
>>>
>>> Considering there is a component that has knowledge about
>>> specialization, and that it can be used internally (behind the API), I
>>> don't see the need to add explicit support for archetypes in the API.
>>>
>>> What I think is better is to define an archetype repo/CKM API to manage
>>> archetypes and to resolve specialization queries, among other queries
>>> like "does this path exist for this archetype ID?", etc. If this is
>>> possible, we can have interoperability between archetype repos: your
>>> queries can use my repo to get specialization info, and vice versa.
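For illustration, a small sketch of what a client of such an archetype
repo/CKM API could look like; the base URL, endpoint paths and response
shapes are hypothetical, since no such API has been standardized yet.

import requests

CKM_BASE = "https://ckm.example.org/api"  # hypothetical archetype repo/CKM service

def specializations_of(archetype_id: str) -> list[str]:
    """Return archetype IDs that specialize the given archetype (assumed endpoint)."""
    r = requests.get(f"{CKM_BASE}/archetypes/{archetype_id}/specializations")
    r.raise_for_status()
    return r.json()["archetype_ids"]

def path_exists(archetype_id: str, path: str) -> bool:
    """Check whether a path exists in an archetype (assumed endpoint and params)."""
    r = requests.get(f"{CKM_BASE}/archetypes/{archetype_id}/paths",
                     params={"path": path})
    r.raise_for_status()
    return r.json()["exists"]

A data repo could then build its "archetype_id IN [...]" clause from
specializations_of(X) regardless of which repository implementation answers
the call.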
>>>
>>>
>>> Best,
>>> Pablo.
>>>
>>>
>>>
>>> On Thu, Feb 15, 2018 at 1:09 PM, Pieter Bos <[email protected]> wrote:
>>> I noticed that the recent version of the REST API only works with
>>> templates, and not archetypes.
>>>
>>> As our EHR is ADL 2 only, this has some interesting consequences.
>>>
>>> Of course, you can upload just the operational templates and probably
>>> create a fully functional EHR, but if you work with ADL 2, you would always
>>> need an external tool to create the OPT2 from the ADL repository that you
>>> want to use, instead of an EHR that generates the OPT2s from the source
>>> archetypes itself. Of course, you could just use the ADL workbench or the
>>> Archie adlchecker command-line utility to do it for you, but I’m not sure
>>> if it’s a nice thing to do.
>>>
>>> Also if you only use OPT2, you might want to do queries such as
>>> ‘retrieve all information that is stored in an EHR that has been derived
>>> from archetype with id X, including archetypes specializing from X’ (not
>>> just operational template X). An example: retrieve all reports, or all care
>>> plans, regardless of the used template. To do that, you probably need a
>>> specialization section in the OPT2, which according to the specs should
>>> have been removed, and you also need to create operational templates for
>>> archetypes that you never use to directly create compositions from. Or
>>> the tool querying the EHR must be fully aware of the full archetype
>>> specialization tree and use it to include all specializing archetypes in
>>> the query.
>>>
>>> So, is it intended that the REST API only works with operational
>>> templates, and never archetypes?
>>>
>>> Regards,
>>>
>>> Pieter Bos
>>>
>>> From: openEHR-technical <[email protected]> on behalf of
>>> Thomas Beale <[email protected]>
>>> Reply-To: For openEHR technical discussions
>>> <[email protected]>
>>> Date: Friday, 26 January 2018 at 14:23
>>> To: Openehr-Technical <[email protected]>
>>> Subject: openEHR REST APIs - Release 0.9.0 / invitation for comments
>>>
>>>
>>> The REST API Team (Bostjan Lah, Erik Sundvall, Sebastian Iancu, Heath
>>> Frankel, Pablo Pazos, and others on the SEC
>>> <https://www.openehr.org/programs/specification/editorialcommittee> and
>>> elsewhere) have made a 0.9.0 Release of the ITS (Implementation
>>> Technology Specifications) component, in order to make a pre-1.0.0
>>> release of the REST APIs available for wider comment.
>>>
>>> The key point about this current release is that it is meant to be a
>>> 'core basics' foundation of APIs to build on; some services, like CDS
>>> and more sophisticated querying (e.g. the kind Erik Sundvall has
>>> published in the past), will be added over time.
>>>
>>> NOTE THAT IN THE 0.9.0 RELEASE, BREAKING CHANGES ARE POSSIBLE.
>>>
>>> You can see the ITS Release 0.9.0 link here
>>> <https://www.openehr.org/programs/specification/latestreleases>, while
>>> the links you see on the specs 'working baseline' page
>>> <https://www.openehr.org/programs/specification/workingbaseline> are the
>>> result of 0.9.0 plus any modifications made due to feedback. The .apib
>>> files are here in Github
>>> <https://github.com/openEHR/specifications-ITS/tree/master/REST_API>
>>> (see mainly the includes directory).
>>>
>>> We are aiming to release 1.0.0 at the end of February, at which point
>>> the formal process kicks in.
>>>
>>> Since we are in Release 0.9.0, the formal PR/CR process is not needed,
>>> and you can comment here. However, if you want to raise a PR you can, in
>>> which case, please set the Component and Affects Version fields
>>> appropriately.
>>>
>>>
>>> - thomas beale
>>> --
>>> Thomas Beale
>>> Principal, Ars Semantica <http://www.arssemantica.com>
>>> Consultant, ABD Team, Intermountain Healthcare <https://intermountainhealthcare.org/>
>>> Management Board, Specifications Program Lead, openEHR Foundation <http://www.openehr.org>
>>> Chartered IT Professional Fellow, BCS, British Computer Society <http://www.bcs.org/category/6044>
>>> Health IT blog <http://wolandscat.net/> | Culture blog <http://wolandsothercat.net/>
>>>
>>>
>>>
>>>
>>> --
>>> Ing. Pablo Pazos Gutiérrez
>>> [email protected]
>>> +598 99 043 145
>>> skype: cabolabs
>>> http://www.cabolabs.com
>>> https://cloudehrserver.com
>>> Subscribe to our newsletter <http://eepurl.com/b_w_tj>
>>>
>>
>>
>>
>>
>> --
>> Ing. Pablo Pazos Gutiérrez
>> [email protected]
>> +598 99 043 145
>> skype: cabolabs
>> http://www.cabolabs.com
>> https://cloudehrserver.com
>> Subscribe to our newsletter <http://eepurl.com/b_w_tj>
>
>
>



-- 
Ing. Pablo Pazos Gutiérrez
[email protected]
+598 99 043 145
skype: cabolabs
http://www.cabolabs.com
https://cloudehrserver.com
Subscribe to our newsletter <http://eepurl.com/b_w_tj>
_______________________________________________
openEHR-technical mailing list
[email protected]
http://lists.openehr.org/mailman/listinfo/openehr-technical_lists.openehr.org
