>Using path-based blobbing probably isn't a million miles from such DBs.
Personally I used a wonderful object database called Matisse (still around
today), which essentially operates as a graph db with write-once semantics,
and I would love to have a side-project to build an openEHR system on that.

I'll talk to you offline sometime about that. I can tell you from my own
experience that it might not be as forbidding as one would think. The more
I've examined the archetypes and seen how they are linked and how the
linkage rules are defined, the more excited I've become. There's definitely a way
to do this.
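
Just to make the idea concrete, here is a very rough sketch of what I have in
mind - purely illustrative Python with invented names and only-approximate
archetype paths, not anything taken from the specs or from any existing
implementation. The point is that once a Composition is flattened into
(archetype path -> value) pairs, treating those nodes and the LINKs between
them as a write-once graph is a small step:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)                  # frozen ~ write-once semantics
    class Node:
        uid: str                             # versioned id, e.g. "entry-7f3a::1"
        paths: dict = field(default_factory=dict)  # archetype path -> leaf value

    # A blood pressure ENTRY flattened into path/value pairs (paths are only
    # indicative, not quoted from the real archetype):
    bp = Node(
        uid="entry-7f3a::1",
        paths={
            "/data/events[at0006]/data/items[at0004]/value": "120 mmHg",
            "/data/events[at0006]/data/items[at0005]/value": "80 mmHg",
        },
    )

    # A LINK is then just an ordinary graph edge:
    # (source versioned uid, path within the source, target versioned uid)
    edges = [("composition-91c2::1", "/content[1]", bp.uid)]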

>Nevertheless, there are a couple of container levels that have
significance in models like openEHR, 13606, CDA and so on: the Composition
(can be seen as a Document) and the Entry (the clinical statement level).
So it's not completely mad to do blobbing at these levels, or build in
other assumptions around them.

I agree it's not "completely mad" to use your approach partly because you
really do need the information in chunks that smell and feel like
parchments. But what if you could have it both ways?
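
By "both ways" I mean, roughly, the following - again just an illustrative
sketch with invented names, not a proposal for the reference model: keep each
Composition as a document-like blob, but index the links between blobs so the
same record can also be walked as a graph, and let a reference resolve either
to the exact version it names or to the latest available version:

    # Hypothetical two-sided store: document blobs plus a graph of links.
    blobs = {}    # "care-plan-91c2::3" (versioned uid) -> serialized Composition
    latest = {}   # "care-plan-91c2"   (logical uid)    -> current version number
    links = []    # (source versioned uid, target reference) - the graph side

    def commit(logical_uid: str, blob: bytes) -> str:
        """Write-once commit: every change creates a new version;
        nothing is overwritten."""
        version = latest.get(logical_uid, 0) + 1
        latest[logical_uid] = version
        versioned_uid = f"{logical_uid}::{version}"
        blobs[versioned_uid] = blob
        return versioned_uid

    def commit_contribution(changes: dict) -> list:
        """A Contribution-style change set: several Compositions versioned
        together (e.g. a new consult note plus an updated medication list
        and care plan)."""
        return [commit(uid, blob) for uid, blob in changes.items()]

    def resolve(ref: str) -> bytes:
        """A ref ending in '::n' is pinned to that version; a bare logical
        uid follows 'latest available', so such a link never goes stale."""
        if "::" not in ref:
            ref = f"{ref}::{latest[ref]}"
        return blobs[ref]

The referencing Composition never has to be recommitted when its target
changes; whether a link is pinned or floating is just a property of the
reference itself.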

>ah - don't blame me for it. I added some engineering understanding and
integration along the way, but this work started with a bunch of very smart
clinical people who gathered the best set of requirements for the 'EHR'
concept, during the Good European Health Record project. One of them, Dr
Dipak Kalra (now head of department of CHIME at UCL), wrote his PhD
thesis <http://eprints.ucl.ac.uk/1584/> on
EHR requirements, and one outcome of that was the ISO 18308 standard, on
the same topic. Sam Heard and other physicians were key in developing these
requirements, and the understanding they have given to the domain has
greatly affected the quality of the development. This, plus numerous
technical people, debates, conferences etc, has led to the
specifications <http://www.openehr.org/programs/specification/releases/currentbaseline>
you see today. Have a look at the revision histories, particularly on the EHR
IM and Data types - you'll see a lot of names.

Of course you'd say that. I've looked at the names. And using a bit of
networking logic, it's not hard to deduce who has been at the center of it
all--and doing much of the writing. But yes, there were also others, and
you know better than I how it all actually balances out.

Thanks for the considerable time you've spent answering my questions.

Randy


On Tue, Apr 16, 2013 at 4:46 PM, Thomas Beale <
thomas.beale at oceaninformatics.com> wrote:

>  On 16/04/2013 18:55, Randolph Neall wrote:
>
> Hi Thomas,
>
>  Again, you've advanced my grasp of openEHR.
>
>  >the change set in openEHR is actually not a single Composition, it's a
> set of Composition Versions, which we call a 'Contribution'. Each such
> Version can be: a logically new Composition (i.e. a Version 1), a changed
> Composition (version /= 1) or a logical deletion (managed by the Version
> lifecycle state marker). So a visit to the GP could result in the following
> Contribution to the EHR system:
>
>    - a new Composition containing the note from the consultation
>    - a modified Medications persistent Composition
>    - a modified Care Plan persistent Composition.
>
> Your comment here is in the context of persistent Compositions, and I
> think what you're saying is that these are a special case: persistent
> Compositions, unlike event Compositions, contain only *one* kind of
> persistent information, and no event information, thus allowing clean
> substitutions when that persistent information is later updated. This would
> avert the horrible scenario I suggested, involving updating heterogeneous
> persistent Compositions. If I'm grasping you, this makes perfect sense.
>
>
> to be 100% clear: the change set versioning model works for all
> Composition types - a single change set (what we call a Contribution) can
> contain versions of persistent and event Compositions.
>
> Semantically, your understanding above is correct: persistent Compositions
> are always dedicated to a single kind of information, usually some kind of
> 'managed list' like 'current medications', 'vaccinations' etc.
>
>
>
>  >Systems do have to be careful to create references that point to
> particular versions.
>
>  Does that mean that tracing a web of connections with current relevance
> requires systems to present invalidated Compositions to users? Or are the
> links themselves revised to point to the replacement Compositions?
>
>
> Normally when a Composition is committed (within a Contribution) and it
> contains a LINK or DV_EHR_URI, that link points to the logical 'latest
> available' target. So the link is always valid. Such a link might point to
> e.g. a lab result Event Composition. The assumption is that the only
> changes to a lab result are corrections or, in the case of microbiology and
> some other long-period tests, updates - but essentially the latest
> available version = the result.
>
> On the other hand, a link to a care plan might easily point to the care
> plan (usually a persistent Composition) as it was at the moment of
> committal. If the referencing Composition were retrieved, and that link
> dereferenced, an older version of the care plan would be retrieved.
>
>
>   If the latter, how does one avoid having to recommit whole sets of revised
> compositions involved in the affected thread of links? It would seem that
> you can't just swap out one item in a tangled web, at least without some
> very sophisticated compensatory activities. Or maybe links are somehow
> named in such a way as always to point to the latest version of something,
> which you seemed to suggest is possible (version-proof links?).
>
>  OpenEHR is a remarkable piece of technology. An EHR record is externally
> a collection of independent and separate documents called Compositions that
> can be invalidated and versioned and swapped out at any time. Yet,
> logically and internally, it is magically a vast graph of nodes and edges,
> with connections not just within archetypes but also between
> archetypes. Logically, the nodes (typically archetypes) are not deleted
> (usually) nor do they lose their initial identity when their contents
> change or when links between them are altered. One wonders, then, why not
> just use a graph DB instead of a collection of documents to house the
> information? Wouldn't that be a shorter path to the same end and reduce
> some of the versioning complexity (you'd say that would increase versioning
> complexity)? Perhaps there are some openEHR implementations that are doing
> just that. No? Could an openEHR system use a graph DB and still be
> considered openEHR?
>
>
> absolutely. Using path-based blobbing probably isn't a million miles from
> such DBs. Personally I used a wonderful object database called Matisse
> (still around today), which essentially operates as a graph db with
> write-once semantics, and I would love to have a side-project to build an
> openEHR system on that.
>
> Nevertheless, there are a couple of container levels that have
> significance in models like openEHR, 13606, CDA and so on: the Composition
> (can be seen as a Document) and the Entry (the clinical statement level).
> So it's not completely mad to do blobbing at these levels, or build in
> other assumptions around them.
>
>
>
>  Do you have a picture or map, somewhere, of your metadata graph, or must
> I examine individual archetypes to see all the links between them?
>
>  >there is an emerging set of 'second order' object definitions, that use
> the URI-based referencing approach in very sophisticated ways to represent
> things like care plans, medication histories and so on. I can't point to a
> spec right now, but they will start to appear.
>
>  What is the motivation for that? To increase the granularity of
> externally-referenceable objects? What current problem would this solve?
>
>
> for example: provide a fast retrieval 'map' of all medications, including
> all actions, for some care plan, e.g. chemotherapy.
>
>
>
>  >We need something to keep us off the streets...
>
>  Not a worry for you, sir. I'll embarrass you by letting on here how
> impressed I've been with the raw intellect everywhere evident in what I
> take to be chiefly your creation and the literary talent you have exercised
> in making it all clear. Great work!
>
>
>
> ah - don't blame me for it. I added some engineering understanding and
> integration along the way, but this work started with a bunch of very smart
> clinical people who gathered the best set of requirements for the 'EHR'
> concept, during the Good European Health Record project. One of them, Dr
> Dipak Kalra (now head of department of CHIME at UCL), wrote his PhD 
> thesis <http://eprints.ucl.ac.uk/1584/> on EHR requirements, and one outcome
> of that was the ISO 18308 standard, on
> the same topic. Sam Heard and other physicians were key in developing these
> requirements, and the understanding they have given to the domain has
> greatly affected the quality of the development. This, plus numerous
> technical people, debates, conferences etc, has led to the
> specifications <http://www.openehr.org/programs/specification/releases/currentbaseline>
> you see today. Have a look at the revision histories, particularly on the
> EHR IM and Data types - you'll see a lot of names.
>
> - thomas
>
>
