> On Wed, 2004-03-03 at 03:04, Thomas Beale wrote:
> > I have to admit that it is only now that we have begun to write primers of various kinds to help understanding - they will start to appear on the web. But in the end, nothing substitutes for presentations and interactive communication...
>
> Pedagogic and didactic material are useful, but the thing which will really win people over is the availability of an openEHR storage/query engine. Until then, for many of us, openEHR remains an extremely interesting theoretical thought experiment. But, as we all know, the gap between theory and practice tends to be small in theory but large in practice. My concerns are that:
>
> a) the storage/query engine will be much harder to build and validate than anyone thinks;

That, at least, is not something I worry about too much, since we already built a prototype of the same based on GEHR models. It was of course not a large, all-singing, all-dancing EHR server, but it did have the hard parts (archetype-based data creation and validation) built in. It wasn't easy, either, but it worked fine. And the models of both archetypes and reference concepts are much more disciplined now, so it will be better.

> b) when used to model and capture a wide range of real-life information, either the openEHR RM (reference model) or the Archetype Definition Language or its conceptual underpinnings will be found to have significant gaps, which then need to be plugged, possibly in messy and inelegant ways - that is a common pattern: a beautifully simple idea ends up as a big, sprawling mess (no, I didn't mention Java...).

There will no doubt be gaps; that is just life. But we are pragmatic too ;-). For example, one question we have asked many people to model in their template approach is the glucose tolerance test. It is simple and ubiquitous, yet hard to model properly, because there is both a time element and a challenge to the subject, as well as the measured data. Solving that and similar examples has helped us see that we can probably deal with almost all of GP medicine, and a good deal of pathology and hospital medicine. But, like anything, we can only see so many steps ahead, and the key is to get software built to find the limits of the current design.

The current version of things includes many elements aligned to standards (or improvements thereon ;-), as well as modelling gained from talking to people who have built and deployed decision support systems and other EHR systems.
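[Editor's note: to make the glucose tolerance test point concrete, here is a minimal sketch in Python of why that test resists a flat "name = value" record. All class and field names below are invented for illustration; they are not the actual openEHR reference-model or archetype classes.]

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

# Hypothetical structures, not openEHR RM classes. The point: a glucose
# tolerance test couples a challenge given to the subject (the glucose
# load) with a *series* of measurements taken at defined times relative
# to that challenge, so a single name/value datum cannot capture it.

@dataclass
class Challenge:
    substance: str        # e.g. "glucose"
    dose_grams: float     # e.g. the standard 75 g oral load
    route: str            # e.g. "oral"

@dataclass
class TimedSample:
    offset: timedelta     # time relative to the challenge
    glucose_mmol_per_l: float

@dataclass
class GlucoseToleranceTest:
    performed: datetime
    challenge: Challenge
    samples: List[TimedSample] = field(default_factory=list)

# A 75 g oral GTT: fasting baseline, then timed samples after the load.
gtt = GlucoseToleranceTest(
    performed=datetime(2004, 3, 3, 9, 0),
    challenge=Challenge(substance="glucose", dose_grams=75.0, route="oral"),
    samples=[
        TimedSample(timedelta(0), 5.1),           # fasting baseline
        TimedSample(timedelta(hours=1), 9.8),
        TimedSample(timedelta(hours=2), 7.2),
    ],
)
```

The design choice the example illustrates is the one the email alludes to: the challenge (subject state) and the timed series both have to be first-class parts of the model, which is what archetype-based observation structures aim to express generically.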
> I am not saying that either of these things will happen, but their possibility needs to go into the openEHR risk equation, at least until there is more documented and, ideally, first-hand experience with it. Regarding documented experience, is there a bibliography anywhere of published evaluations of actual implementations of openEHR, or of its predecessor GEHR? That would help. I've read the theory of openEHR several times, but I'd love to read about some practical experiences with it in pilot systems.

Published evaluations are hard to come by. There are several incarnations of GEHR which had (and still have) commercial success. The evaluations of the work we did in Australia do have reports associated with them, but the Commonwealth owns those and we are not allowed to give them out. Probably the most available reports would be on the UCL systems (pre-openEHR but archetype-based), which have been in production for a couple of years or so; I would have to find out what reports are available.

The openEHR implementations in the works will come online in the next year or so; evaluations of them will come during/after that, of course.
- thomas beale
