Trying to understand the openEHR Information Model
big risk - it's a combination of how likely it is, and how bad it is if they clash. Generally, current location, current medication lists, and summary lists are things where contention can happen. Quite often I've seen a cascade of things happen on a patient simultaneously as multiple people focus on the patient. The other place where contention has been a problem in my experience is pathology reports that are not complete - in a busy lab doing 2000 reports/day, I observed editing contention 10-20x a day on average. That's pretty low, but the consequences of a clash are bad.

Grahame

On Mon, Apr 15, 2013 at 11:25 PM, Bert Verhees bert.verhees at rosa.nl wrote:

On 04/15/2013 02:56 PM, Grahame Grieve wrote: well, that's true for some parts of the record - the historical parts. Other parts, summary parts, that's quite untrue. In most enterprise systems, records tend to be rarely updated, or intensively updated, and not much in between.

Can you give an example of parts of records which are at big risk of competitive updates?

Thanks
Bert

___
openEHR-technical mailing list
openEHR-technical at lists.openehr.org
http://lists.openehr.org/mailman/listinfo/openehr-technical_lists.openehr.org

--
http://www.healthintersections.com.au / grahame at healthintersections.com.au / +61 411 867 065
Trying to understand the openEHR Information Model
Yes, in the lab situation we typically saw this multiple times a day - multiple people trying to update the same cluster of records at the same time. So the scenario is a typical relational database - a cluster of related records, some information in fields, and some in blobs as structured text. Someone would start editing that cluster in a GUI, and then either someone else or a machine would also want to perform some operation that caused updates to some portion of the same cluster of records.

A user might spend several minutes editing the record - or even several hours, particularly if they get distracted by phone calls, and it's a complex report like an autopsy, for instance. So you can't afford to do this as database transactions, but you also can't afford to do version-based merging, or to lose either the previously committed information or the newly committed information - and the users managing this are not abstract thinkers with the time to figure out the clash. And losing good clinical information due to bad IT - the users are particularly intolerant of this. And as I said, it happened much more often than you'd expect. I spent a couple of years refining the Kestral system for managing this issue.

I haven't seen the same against current lists in an EHR - just that they are updated continually. I've no reason to think that the issue is different in principle, though the frequency might be. To Randy's point - managing concurrency is a real issue. Period.

Grahame

On Tue, Apr 16, 2013 at 2:08 AM, Thomas Beale thomas.beale at oceaninformatics.com wrote:

On 15/04/2013 14:37, Grahame Grieve wrote: big risk - it's a combination of how likely it is, and how bad it is if they clash. Generally, current location, current medication lists, and summary lists are things where contention can happen.
Quite often I've seen a cascade of things happen on a patient simultaneously as multiple people focus on the patient. The other place where contention has been a problem in my experience is pathology reports that are not complete - in a busy lab doing 2000 reports/day, I observed editing contention 10-20x a day on average. That's pretty low, but the consequences of a clash are bad.

Grahame - can you elucidate on this? Are you saying that you have seen multiple parallel committers trying to update the same lab report (same patient, order etc) at the same time? The only way I can imagine this is if multiple specialist lab systems contribute to a common overall report (i.e. some kind of order grouping). In this case, there is unavoidably logic to do with how the pieces get stitched together anyway, so I am not sure how contention errors could arise.

- thomas

--
http://www.healthintersections.com.au / grahame at healthintersections.com.au / +61 411 867 065
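The clash Grahame describes - an editor working from a snapshot while another user or a machine commits behind their back - is the classic optimistic-concurrency problem. A minimal sketch of one common resolution strategy (not Kestral's actual design; all names here are hypothetical): each commit carries the version it was based on, and a stale commit is rejected for human review rather than silently overwriting anything.

```python
class StaleEditError(Exception):
    """Raised when a commit is based on an outdated version of the record."""

class RecordCluster:
    """A cluster of related records, versioned as a unit."""
    def __init__(self, content):
        self.version = 1
        self.content = dict(content)

    def checkout(self):
        # An editor takes a working copy plus the version it is based on.
        return self.version, dict(self.content)

    def commit(self, based_on, new_content):
        # Reject the commit if someone else committed since checkout:
        # neither the earlier nor the later edit is lost, and a human
        # resolves the clash instead of the database.
        if based_on != self.version:
            raise StaleEditError(
                f"edit based on v{based_on}, but store is at v{self.version}")
        self.version += 1
        self.content = dict(new_content)
        return self.version

report = RecordCluster({"status": "preliminary", "result": ""})
v, copy = report.checkout()  # pathologist opens the report in the GUI
# meanwhile, an analyser interface commits an update to the same cluster:
report.commit(1, {"status": "preliminary", "result": "pending"})
try:
    copy["result"] = "no abnormality detected"
    report.commit(v, copy)   # pathologist's commit is now stale
except StaleEditError as e:
    print("clash detected:", e)
```

The point of the sketch is that the long human editing session never holds a database transaction open; the conflict surfaces only at commit time, which matches the 10-20 clashes a day Grahame reports being rare enough to hand to a person.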
Trying to understand the openEHR Information Model
well, yes, there'd be nothing lost, and everything would be in the database. But if the users can only see the last update, then the prior content is effectively lost anyway. If, on the other hand, users can see the older updates, then they'd simply have no idea which information is current. I think of that last as the worst possible outcome.

Grahame

On 16/04/2013, at 4:43 AM, Bert Verhees bert.verhees at rosa.nl wrote:

On 04/15/2013 08:37 PM, Grahame Grieve wrote: but you can't afford to do either version based merging, or to lose either the previously committed information

But what if every user, nurse or GP, creates a new composition when they make an addition? Then there is nothing lost.

Bert
Trying to understand the openEHR Information Model
These scenarios were one of the reasons we were very careful to properly model commit time (system time) separately from the times of the visit, observations, actions etc (world time). The commit of the info may come days late, but it is always easy to determine a) what other clinicians could see on the system at time T, and b) in what order things happened in clinical reality. The caveat is that the system won't tell you the full story until everyone has committed their data. This doesn't mean there are no tricky competitive write situations, but via the above, and the versioning semantics (which include system-based branching), there are reasonably obvious strategies for correctly resolving the confusion.

- thomas

On 15/04/2013 20:11, Karsten Hilbert wrote:

On Mon, Apr 15, 2013 at 08:40:59PM +0200, Bert Verhees wrote:

On 04/15/2013 06:12 PM, Thomas Beale wrote: patient sees the GP, then visits a practice nurse, without the GP record being committed first. yes, that's certainly a possibility, if the practice solution isn't designed to deal with it, and the staff are not trained...

In the Netherlands there is what we call the door-handle patient. At the moment he is leaving the room, and is busy opening the door, he tells what he is really worried about.

That's standard GP land. The GP asks the patient to sit down for an extra minute and explains why he thinks it is not cancer, or he makes another appointment because he thinks the patient has a point.

So a GP at the latest should commit after the door is closed and the patient has definitely gone, and just before the new patient enters.

For one thing, that moment (the patient being gone for good) never comes in reality. However, there's no need to define such a moment in time. The GP writes into the EMR whatever is known at any point during the consultation. Yes, that will be subject to editing, deleting, amending, but that's normal!
The nurse (that is, any other workplace of the GP's network) will see whatever has been committed. Whenever something is committed, a change notification is pushed out by the storage engine and clients can update themselves if relevant (that's how GNUmed does it). This, of course, does not yet solve the conflict of the user editing something that's just being changed, but at least there's no chance of not being aware of it.

At the moment a patient arrives again at the nurse's or assistant's desk, the dossier should be fully up to date, or it should be recognizable that it is not up to date

In reality, fully up to date never happens. It is always the current state of affairs.

and then the nurse has to wait until the lock is released.

Ah, no, it doesn't make a difference whether the nurse waits for a lock to be released or not - because even if the GP released the lock, the nurse has no way of knowing whether the GP committed everything (instructions) needing committing, or whether the GP forgot something. That can only be assured by out-of-band means, say, the patient knowing what the nurse needs to do for him (or GP and patient agreeing and sending an action sheet *before* the patient leaves the room - and still that does not prove the GP does not remember something needing doing after the patient left the exam room). It is a problem not solvable by technical means alone.

Karsten

--
Thomas Beale
Chief Technology Officer, Ocean Informatics http://www.oceaninformatics.com/
Chair Architectural Review Board, openEHR Foundation http://www.openehr.org/
Honorary Research Fellow, University College London http://www.chime.ucl.ac.uk/
Chartered IT Professional Fellow, BCS, British Computer Society http://www.bcs.org.uk/
Health IT blog http://www.wolandscat.net/
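Thomas's two-timeline separation can be illustrated with a small sketch (illustrative names only, not openEHR's actual class model): each entry carries both a commit time (system time) and an event time (world time), and the two questions he lists - what was visible at time T, and in what order things actually happened - are answered by querying different timestamps.

```python
from datetime import datetime

class Entry:
    def __init__(self, description, event_time, commit_time):
        self.description = description
        self.event_time = event_time    # world time: when it happened clinically
        self.commit_time = commit_time  # system time: when it was committed

# The GP sees the patient first but commits late; the nurse commits promptly.
entries = [
    Entry("GP consultation note", datetime(2013, 4, 15, 9, 0),
          datetime(2013, 4, 15, 11, 30)),
    Entry("nurse wound dressing", datetime(2013, 4, 15, 9, 20),
          datetime(2013, 4, 15, 9, 25)),
]

def visible_at(entries, t):
    # a) what other clinicians could see on the system at time T
    return [e for e in entries if e.commit_time <= t]

def clinical_order(entries):
    # b) in what order things happened in clinical reality
    return sorted(entries, key=lambda e: e.event_time)

# At 10:00 only the nurse's note has been committed, even though the
# GP's consultation happened earlier in world time.
print([e.description for e in visible_at(entries, datetime(2013, 4, 15, 10, 0))])
print([e.description for e in clinical_order(entries)])
```

The caveat Thomas mentions falls out directly: `clinical_order` only tells the full story once every late committer has actually committed.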
Mimetype ADL
Hi,

Is there a mimetype defined for ADL files? And if not, what is advised to use?

Thanks
Bert
Trying to understand the openEHR Information Model
Hi Gavin and others!

On Mon, Apr 15, 2013 at 4:39 PM, gjb gjb at crs4.it wrote:

I thought about this a few years ago and came to the conclusion that the GUI/client would need quite a bit of savvy HCI. The person working on the data needs to be kept informed of how/when the system may be changing under him. Google Documents has now come along and does something like that. You're busy editing one section of an article, then a networked colleague begins to edit the same thing. GDocs tells you who it is and how to communicate with them by a secondary channel (the EHR would be the primary channel). You can both still keep editing, but at least you know you are going to have to double-check the result afterwards. Conflict resolution is best avoided by timely human intervention rather than automated attempts afterwards. And GDocs does well even when clients go offline for a short time. [...] Gavin Brelstaff - CRS4

Some of the magic behind multi-user/multi-device editing in Google Docs is referred to as operational transformation algorithms. Have a look at, for example: http://www.codecommit.com/blog/java/understanding-and-applying-operational-transformation or http://en.wikipedia.org/wiki/Operational_transformation - very interesting stuff when you look closer at it.

Some years ago one of our student projects used some of that power provided in Google Wave to experiment with a partial implementation of a multi-user archetype editor. In that case the operational transformation operated on XML pieces. The simplest case is operations on plain text - that is the case usually described in explanations. Open source implementations of operational transformation working with, for example, pieces of JSON are also available (http://sharejs.org/).
In the upcoming (BMC-accepted) paper Applying representational state transfer (REST) architecture to archetype-based electronic health record systems (and briefly in my thesis) I mention the thought of using operational transformation in the EHR editing stage taking place before doing real openEHR contribution commits. This would be a possibly interesting replacement for, or upgrade of, the Contribution Builder component described in the paper. It would allow simultaneous shared multi-user and multi-device data entry for many (but not all) use cases. It won't scale to thousands of users simultaneously preparing the same contribution for the same patient, but it should scale well for a handful of simultaneous users per patient if they are somewhat aware of each other's duties and responsibilities.

The possibility to flag openEHR content as incomplete would allow snapshots from the shared contribution build to be persisted in the proper EHR at a regular interval and/or actively triggered by the user when they need to shift attention to other things. Later, when considered complete, another version could be marked as complete, be signed and committed.

If anybody currently has time or resources (e.g. master's thesis students) to pursue an operational transformation openEHR data entry approach in an open source project, then don't hesitate to contact me for more detailed discussions and potential cooperation. A bit wiser from my work with the repeatedly delayed REST implementation and publication approach, I'd prefer to do such experimentation in an incremental, multi-site, open, public way instead of only having a big publication/delivery in the end.

Best regards,
Erik Sundvall
erik.sundvall at liu.se http://www.imt.liu.se/~erisu/

P.s. Quote from the upcoming paper Applying representational state transfer (REST) architecture to archetype-based electronic health record systems: Shared contribution builds is another interesting potential future work.
The current contribution builder design works best if a contribution build is personal and the user uses one device at a time for editing it. For more dynamic teamwork, or multimodal or multi-device data entry, using systems based on Operational Transformation (OT) would likely enhance the user experience. OT is a lock-free resolution mechanism that is used, for example, in collaborative systems like Google Docs and Apache Wave (formerly Google Wave). Since OT involves many small transactions, use of approaches like WebSockets for accessing shared contribution builds is anticipated. When explored properly, specific OT application recommendations should be added to the architecture.
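For readers new to the idea, the plain-text case Erik mentions can be shown in a few lines. This is a deliberately minimal sketch of the core OT convergence property for two concurrent inserts (with a simple "earlier position wins" tie-break as a stand-in for a real site-priority rule); production systems like ShareJS handle deletes, composition and many sites on top of this:

```python
def apply_insert(text, pos, s):
    """Insert string s into text at position pos."""
    return text[:pos] + s + text[pos:]

def transform_pos(pos, other_pos, other_len):
    # Shift our insert position right if a concurrent insert landed
    # at or before it (simplified tie-break for equal positions).
    return pos + other_len if other_pos <= pos else pos

doc = "blood pressure: "
op_a = (16, "120/80")   # user A appends the reading
op_b = (0, "Sitting ")  # user B concurrently prefixes the posture

# Site 1 applies A first, then B transformed against A.
site1 = apply_insert(doc, *op_a)
site1 = apply_insert(site1, transform_pos(op_b[0], op_a[0], len(op_a[1])), op_b[1])

# Site 2 applies B first, then A transformed against B.
site2 = apply_insert(doc, *op_b)
site2 = apply_insert(site2, transform_pos(op_a[0], op_b[0], len(op_b[1])), op_a[1])

# Both sites converge on the same text without locking either user out.
print(site1)
print(site1 == site2)
```

Convergence regardless of application order is exactly what makes OT lock-free: neither clinician waits for the other, and no committed keystroke is lost.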
Trying to understand the openEHR Information Model
with 95 nodes of data in some very specific tree structure - the database and query service just keep working. Referencing, larger logical structures like 'episodes', and the update semantics don't come for free, and require careful design. So I think we have bought into a new area of difficulty, as the price of quite a significant gain over 'single level' systems where the class model or ER model encodes all the information semantics. We need something to keep us off the streets...

- thomas
Trying to understand the openEHR Information Model
in very sophisticated ways to represent things like care plans, medication histories and so on. I can't point to a spec right now, but they will start to appear.

What is the motivation for that? To increase the granularity of externally referenceable objects? What current problem would this solve?

for example: provide a fast retrieval 'map' of all medications, including all actions, for some care plan, e.g. chemotherapy.

We need something to keep us off the streets...

Not a worry for you, sir. I'll embarrass you by letting on here how impressed I've been with the raw intellect everywhere evident in what I take to be chiefly your creation, and the literary talent you have exercised in making it all clear. Great work!

ah - don't blame me for it. I added some engineering understanding and integration along the way, but this work started with a bunch of very smart clinical people who gathered the best set of requirements for the 'EHR' concept during the Good European Health Record project. One of them, Dr Dipak Kalra (now head of the CHIME department at UCL), wrote his PhD thesis http://eprints.ucl.ac.uk/1584/ on EHR requirements, and one outcome of that was the ISO 18308 standard, on the same topic. Sam Heard and other physicians were key in developing these requirements, and the understanding they have given to the domain has greatly affected the quality of the development. This, plus numerous technical people, debates, conferences etc, has led to the specifications http://www.openehr.org/programs/specification/releases/currentbaseline you see today. Have a look at the revision histories, particularly on the EHR IM and Data types - you'll see a lot of names.

- thomas
Trying to understand the openEHR Information Model
a persistent Composition) as it was at the moment of committal. If the referencing Composition were retrieved, and that link dereferenced, an older version of the care plan would be retrieved. If the latter, how does one avoid having to recommit whole sets of revised compositions involved in the affected thread of links? It would seem that you can't just swap out one item in a tangled web, at least not without some very sophisticated compensatory activities. Or maybe links are somehow named in such a way as always to point to the latest version of something, which you seemed to suggest is possible (version-proof links?).

openEHR is a remarkable piece of technology. An EHR record is externally a collection of independent and separate documents called Compositions that can be invalidated and versioned and swapped out at any time. Yet, logically and internally, it is magically a vast graph of nodes and edges, with connections not just within archetypes but also between archetypes. Logically, the nodes (typically archetypes) are not deleted (usually), nor do they lose their initial identity when their contents change or when links between them are altered. One wonders, then, why not just use a graph DB instead of a collection of documents to house the information? Wouldn't that be a shorter path to the same end, and reduce some of the versioning complexity (you'd say that would increase versioning complexity)? Perhaps there are some openEHR implementations that are doing just that. No? Could an openEHR system use a graph DB and still be considered openEHR?

absolutely. Using path-based blobbing probably isn't a million miles from such DBs. Personally I used a wonderful object database called Matisse (still around today), which essentially operates as a graph db with write-once semantics, and I would love to have a side project to build an openEHR system on that.
Nevertheless, there are a couple of container levels that have significance in models like openEHR, 13606, CDA and so on: the Composition (can be seen as a Document) and the Entry (the clinical statement level). So it's not completely mad to do blobbing at these levels, or to build in other assumptions around them.

Do you have a picture or map, somewhere, of your metadata graph, or must I examine individual archetypes to see all the links between them?

there is an emerging set of 'second order' object definitions that use the URI-based referencing approach in very sophisticated ways to represent things like care plans, medication histories and so on. I can't point to a spec right now, but they will start to appear.

What is the motivation for that? To increase the granularity of externally referenceable objects? What current problem would this solve?

for example: provide a fast retrieval 'map' of all medications, including all actions, for some care plan, e.g. chemotherapy.

We need something to keep us off the streets...

Not a worry for you, sir. I'll embarrass you by letting on here how impressed I've been with the raw intellect everywhere evident in what I take to be chiefly your creation, and the literary talent you have exercised in making it all clear. Great work!

ah - don't blame me for it. I added some engineering understanding and integration along the way, but this work started with a bunch of very smart clinical people who gathered the best set of requirements for the 'EHR' concept during the Good European Health Record project. One of them, Dr Dipak Kalra (now head of the CHIME department at UCL), wrote his PhD thesis http://eprints.ucl.ac.uk/1584/ on EHR requirements, and one outcome of that was the ISO 18308 standard, on the same topic. Sam Heard and other physicians were key in developing these requirements, and the understanding they have given to the domain has greatly affected the quality of the development.
This, plus numerous technical people, debates, conferences etc, has led to the specifications http://www.openehr.org/programs/specification/releases/currentbaseline you see today. Have a look at the revision histories, particularly on the EHR IM and Data types - you'll see a lot of names.

- thomas
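The "version-proof links" question in this exchange has a simple shape that can be sketched independently of openEHR's actual VERSION/VERSIONED_OBJECT machinery (all identifiers below are hypothetical): a reference either pins a specific version of a target, or names only the logical object and so always dereferences to the latest committed version. With the latter, a Composition that links to a care plan need not be recommitted when the plan is revised.

```python
class VersionedStore:
    """Append-only store: each logical object keeps all of its versions."""
    def __init__(self):
        self._versions = {}  # uid -> list of payloads, index = version - 1

    def commit(self, uid, payload):
        self._versions.setdefault(uid, []).append(payload)
        return f"{uid}::{len(self._versions[uid])}"  # version-pinned reference

    def resolve(self, ref):
        # "uid::3" pins a specific version (the target as it was at
        # committal); a bare "uid" is version-proof and dereferences
        # to the latest committed version.
        if "::" in ref:
            uid, v = ref.split("::")
            return self._versions[uid][int(v) - 1]
        return self._versions[ref][-1]

store = VersionedStore()
pinned = store.commit("care_plan_42", "chemo plan, cycle 1")
store.commit("care_plan_42", "chemo plan, cycle 2 (revised)")

print(store.resolve(pinned))          # the plan as it was when the link was made
print(store.resolve("care_plan_42"))  # the latest revision, no recommit needed
```

Which reference style a committing system should use is a clinical-semantics decision: a signed report arguably needs the pinned form, while a summary list wants the version-proof form.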