Hi!

In our approach to openEHR storage we have tried to keep use cases
separate rather than trying to solve everything with a single database
structure.

A simplified (perhaps naive) version of the reasoning:
1. If you want to access data as big chunks in a certain serialization
format (like openEHR COMPOSITIONs used in daily patient care), then
store them in such chunks (and index on fields relevant to your
retrieval needs).
2. If you want to access small data fields from huge numbers of EHRs
(like population-wide statistics/epidemiology), then store them as such
small pieces.
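Purely as an illustrative sketch (the table and column names below are
my own invention for this email, not from any openEHR specification),
the two storage patterns could look something like this using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Use case 1: store each COMPOSITION as one serialized chunk, with a few
# indexed metadata fields relevant to retrieval in daily patient care.
conn.execute("""CREATE TABLE compositions (
    uid TEXT PRIMARY KEY,
    ehr_id TEXT NOT NULL,
    archetype_id TEXT,
    committed_at TEXT NOT NULL,   -- ISO 8601 timestamp of the commit
    body BLOB NOT NULL            -- the full serialized COMPOSITION
)""")
conn.execute(
    "CREATE INDEX idx_comp_ehr_time ON compositions (ehr_id, committed_at)")

# Use case 2: a narrow "small pieces" table for population-wide queries,
# fed asynchronously from the chunk store above.
conn.execute("""CREATE TABLE data_points (
    ehr_id TEXT NOT NULL,
    path TEXT NOT NULL,           -- e.g. an archetype path to the field
    value_num REAL,
    observed_at TEXT NOT NULL
)""")
conn.execute("CREATE INDEX idx_dp_path ON data_points (path, value_num)")

# Daily care: fetch the latest chunks for one patient - one indexed lookup.
conn.execute("INSERT INTO compositions VALUES (?, ?, ?, ?, ?)",
             ("c1", "ehr42", "openEHR-EHR-COMPOSITION.encounter.v1",
              "2011-06-01T10:00:00", b"...serialized form..."))
rows = conn.execute(
    "SELECT uid FROM compositions WHERE ehr_id = ? "
    "ORDER BY committed_at DESC LIMIT 10", ("ehr42",)).fetchall()

# Epidemiology: scan one small field across all EHRs - no chunk parsing.
conn.execute("INSERT INTO data_points VALUES (?, ?, ?, ?)",
             ("ehr42", "/data/events/blood_pressure/systolic",
              142.0, "2011-06-01T10:00:00"))
high = conn.execute(
    "SELECT COUNT(DISTINCT ehr_id) FROM data_points "
    "WHERE path = ? AND value_num > 140",
    ("/data/events/blood_pressure/systolic",)).fetchone()[0]
print(rows, high)
```

The point is that each database only carries the indexes its own access
pattern needs, instead of one structure compromising for both.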

OpenEHR's append-only (or "never physically delete") principle,
combined with its clearly timestamped operations, makes replication
between these databases easier. If the DB in use case 2 is not used
for entering data, and if it can tolerate a lag of a few minutes behind
the live EHRs (1.), then the implementation is not very hard.
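As a toy sketch of why that replication stays simple (the data
structures, names and the one-minute safety lag below are just my
assumptions for illustration, not a real implementation):

```python
from datetime import datetime, timedelta

# Toy append-only log of commits in the "live" EHR store (use case 1).
# Each entry is (commit_timestamp, payload); entries are never updated
# in place or deleted, per the openEHR "never physically delete" rule.
live_log = [
    (datetime(2011, 6, 6, 12, 0), "composition A"),
    (datetime(2011, 6, 6, 12, 5), "composition B"),
    (datetime(2011, 6, 6, 12, 9), "composition C"),
]

analytics_copy = []          # the use-case-2 store, read-only for users
watermark = datetime.min     # timestamp of the last replicated commit

def replicate_once(now):
    """Copy all commits newer than the watermark, up to a safety lag.

    Because the source is append-only and every operation is
    timestamped, "everything after the watermark" is guaranteed to be
    exactly the delta; no diffing or deletion handling is needed.
    """
    global watermark
    cutoff = now - timedelta(minutes=1)   # tolerate a small lag behind live
    for ts, payload in live_log:
        if watermark < ts <= cutoff:
            analytics_copy.append((ts, payload))
            watermark = ts

replicate_once(datetime(2011, 6, 6, 12, 8))
print([p for _, p in analytics_copy])   # A and B replicated; C is too recent
```

Each replication pass only has to ask "what was committed after my
watermark?", which is cheap against a timestamp index.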

Some more hints about our reasoning are available in the poster at:
http://www.imt.liu.se/~erisu/2010/EEE-Poster-multipage.pdf
A proper detailed paper is (still) in the works...

I suspect that if you aim for a "1.5" in between 1 and 2, what you
will get is exactly a compromise that is optimal for neither of the
above-mentioned use cases. :-)

Best regards,
Erik Sundvall
erik.sundvall at liu.se http://www.imt.liu.se/~erisu/ Tel: +46-13-286733



On Mon, Jun 6, 2011 at 18:05, Ian McNicoll
<Ian.McNicoll at oceaninformatics.com> wrote:
> Hi Alberto,
>
> A few naive comments/questions from a clinical modelling perspective.
>
> The granularity of the archetypes is mostly determined by issues of
> clinical validity and reusability, rather than performance. We do try
> to keep archetype tree structures as 'flat' as possible, i.e. remove
> unnecessary clusters, and some work needs to be done to revise some of
> the older draft CKM archetypes (e.g. the OBSERVATION.examination
> series) in this regard. It would be interesting to know what you meant
> by the question - can you give a couple of examples of more and less
> granular archetypes, from your perspective?
>
> We have recently been able to develop a high-performance EHR using 80%
> CKM archetypes (or those of similar granularity/re-usability) with
> querying 100% via AQL. The key change we made to improve performance
> was to make sure that dates are well supported by indexing, e.g. date
> recorded, Composition start_date, observation time etc. Most
> operational queries in a live EHR are time-based - e.g. a chart of the
> most recent results, current admissions etc. OTOH many reporting-style
> queries will be terminology-based, i.e. patients with diabetes, and I
> expect this is an area where specific indexing might help further. I
> know that UK GP systems, which have traditional RDBMS-type
> architectures, have extensive indexing on diagnostic codes. Workflow
> indexing will also be important in other applications for tracking
> orders and resultant activities. This is a facility that Ocean is
> currently implementing in OceanEHR.
>
> So indexing on dates, diagnosis / procedure codes and workflow IDs is
> probably the key.
>
> You might find it helpful to speak to the HL7 RIMBAA community.
> Although starting from a very different RM, they are essentially
> facing a similar low-level engineering problem. (I will get killed by
> both communities for that statement!!).
>
> I am interested in your question re granularity - can you explain
> further what you were concerned about?
>
> Cheers,
>
> Ian
>
>
> Dr Ian McNicoll
> office +44 (0)1536 414 994
>        +44 (0)2032 392 970
> fax +44 (0)1536 516317
> mobile +44 (0)775 209 7859
> skype ianmcnicoll
> ian.mcnicoll at oceaninformatics.com
>
> Clinical Modelling Consultant, Ocean Informatics, UK
> openEHR Clinical Knowledge Editor www.openehr.org/knowledge
> Honorary Senior Research Associate, CHIME, UCL
> BCS Primary Health Care www.phcsg.org
>
>
>
>
> On 3 June 2011 13:27, Alberto Moreno Conde <albertomorenoconde at gmail.com> 
> wrote:
>> Dear all,
>>
>> Within the Virgen del Rocío University Hospital we are analysing how to
>> implement an EHR based on the dual-model approach. When we analysed a
>> direct implementation of a database based on either the openEHR
>> Reference Model or ISO 13606, we found that it could have slow
>> performance. Given that we are concerned about this problem, we would
>> like to know which strategies implementers have identified to speed up
>> the performance of storage and querying.
>>
>> The granularity level is also an open issue that impacts performance.
>> I would like to know if the level of granularity of the archetypes
>> contained within the openEHR CKM can satisfy the requirements of an
>> EHR with more than 1 million records.
>>
>> Kind Regards
>>
>> Alberto
>>
>> Alberto Moreno Conde
>> GIT - Grupo de Innovación Tecnológica
>> Hospital Universitario Virgen del Rocío
>> Edif. Centro de Documentación Clínica Avanzada
>> Av. Manuel Siurot, s/n.
>> C.P.: 41013 SEVILLA
>>
>>
>> _______________________________________________
>> openEHR-technical mailing list
>> openEHR-technical at openehr.org
>> http://lists.chime.ucl.ac.uk/mailman/listinfo/openehr-technical
>>
>>
>
>

