On 20 Jul 2007, at 12:26, Thomas Beale wrote:
> Stef Verlinden wrote:
>> My additional question is that I want to store that 'local' quality
>> assessment outcome somewhere as well. Therefore my question is, can
>> we add a generic data quality marker/label, which is adapted for to
>> local situation by specializing that archetype.
> Stef, the problem this poses for shared health records is: how to deal
> with local post facto quality markers when the EHR information
> (originally from system X) is sent from system A (yours) to systems B, C,
> and say a national summary record? What if it is sent to a number of
> places? Do we throw away your local quality markers and let the people
> at systems B, C etc. add their own? Do we keep everyone's local
> annotations, thus accumulating a pile of (presumably differing) markers?
> What if information from source X is received by both system A and
> system B and they both add their own (different) quality markers? What
> if their versions of the data are then sent to another system D? What
> should system D think?
I think I get your point. The answer could be that we only store the
'hard' quality markers and then compute the data quality locally,
according to the local criteria, every time we want to view the data.
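That idea can be sketched as follows. All names here (the marker fields, the criteria, the label values) are illustrative assumptions, not part of any openEHR specification; the point is only that nothing beyond the objective markers needs to be persisted:

```python
# Hard, objective markers stored with the data; local policy is applied
# only at view time. All names here are hypothetical.

HARD_MARKERS = {
    "device_calibrated": True,        # calibration current at measurement time
    "days_since_calibration": 200,
    "operator_certified": True,
}

# Local criteria, maintained per site and revisable over time without
# touching the stored record.
LOCAL_CRITERIA = [
    ("device_calibrated", lambda v: v is True),
    ("days_since_calibration", lambda v: v is not None and v <= 365),
    ("operator_certified", lambda v: v is True),
]

def local_quality(markers, criteria):
    """Derive a local quality label on the fly; nothing extra is stored."""
    failed = [name for name, ok in criteria if not ok(markers.get(name))]
    return ("acceptable", []) if not failed else ("questionable", failed)

label, failures = local_quality(HARD_MARKERS, LOCAL_CRITERIA)
```

Because the label is derived, two sites with different criteria can view the same record and each get their own quality assessment, with no accumulation of conflicting stored markers.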
There are three things I'm still struggling with:
- where to store those local protocols/criteria. Do we need to set up
a separate system/database for that, or can we store them in a
'localized' archetype/template? How would one do that with the other
protocols we have (or are going to have) to deal with, especially the
protocols dealing with workflow/clinical pathways?
- we'll have to deal with protocols/criteria even if we don't want
to. For example, the criteria for diagnosis X may vary locally and
change over time (and this is not an 'exotic' example). So does that
mean that you can't store a diagnosis in the EHR, and that you have to
re-evaluate/assess it every time you use the EHR, from the 'hard'
criteria such as symptoms and clinical and lab findings (according
to your local criteria)? Thinking about this, is there a place to
store the protocol/criteria by which a certain diagnosis is 'set'?
- I don't know if I understand this correctly, but in order to
assess the quality of a certain entry performed, for instance, two
years ago, it's necessary to know the exact 'state' of the device on
that day. Is it possible to determine this historic device state 'on
the fly' (i.e. isn't that going to take too much time), or are there
other ways to do that?
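On the third point, one common approach is to keep a timestamped log of device state changes (calibrations, maintenance, firmware updates) and reconstruct the state at any past date as the latest entry at or before that date. A rough sketch, with invented field names, to show that the lookup is cheap (a single binary search, not a replay of history):

```python
from bisect import bisect_right
from datetime import date

# Hypothetical calibration/maintenance log for one device, sorted by date.
DEVICE_LOG = [
    (date(2005, 1, 10), {"calibrated": True,  "firmware": "1.0"}),
    (date(2005, 9, 2),  {"calibrated": True,  "firmware": "1.1"}),
    (date(2006, 3, 15), {"calibrated": False, "firmware": "1.1"}),  # failed check
    (date(2006, 4, 1),  {"calibrated": True,  "firmware": "1.2"}),
]

def state_on(log, day):
    """Return the device state in force on a given day (latest entry <= day),
    or None if the day predates the log."""
    dates = [d for d, _ in log]
    i = bisect_right(dates, day)
    return log[i - 1][1] if i else None

# Reconstructing the state 'two years ago' is a point-in-time lookup.
past_state = state_on(DEVICE_LOG, date(2006, 3, 20))
```

So determining the historic state on the fly need not be slow, provided the device's state changes were logged at the time they happened.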
Cheers,
Stef
>> Rethinking this, I
>> have two additional thoughts/questions:
>> - if we assess data quality at application level, does one need/want to
>> store the outcome?
>> - if one wants to store the outcome, is it better to use a
>> 'specialized' template for that?
>>
> I suspect that a special local database should probably be used.....
>> My 'feeling' is that data quality criteria (especially when they're
>> more widely accepted) need to be regarded as a standard/protocol and
>> therefore need to be stored in such a 'specialized' archetype/
>> template.
>>
> I don't think the issue is specialised templates (you can easily store
> such information in an optional part of the protocol section of an
> Entry); the problem is the idea above that quality markers could be set
> locally, which in a shared environment means repeated adding of markers
> and accumulation of markers.
>> Data quality criteria are not only commonly used but also mandatory
>> when, for instance, developing new drugs (GLP) or doing clinical
>> testing (GCP) for new drug approvals. Everybody involved uses them,
>> and it turns out that everybody sticks to the same criteria, since
>> they're actually quite simple:
>> - is the device used calibrated and maintained properly?
>> - is the device operated by the 'right' person and in the 'right'
>> environment?
>> Then of course there are many variations on these themes, depending
>> on the device, which makes it more work to register, but it won't
>> become more complex.
>>
>> In answer to Thomas, I wouldn't advocate subjective individual
>> 'assessments/testimonials' of/about data quality without properly
>> defined criteria. If somebody 'suspects' a device isn't working
>> properly, one should take action (for instance, notify the technical
>> staff and make them re-calibrate the device). I wouldn't like to be
>> responsible for a patient's health if I have to work with data
>> collected from a device that's possibly malfunctioning.
>>
> Well... that's the same as now; it's just that the information
> wouldn't be as widely shared as in an e-health world.
>
> One thing we forgot to mention earlier on is that in openEHR,
> attestations can be recorded on versions - see
> http://www.openehr.org/uml/release-1.0.1/Browsable/_9_0_76d0249_1109326589721_134411_997Report.html
>
> Every time a Version (of a Composition) is imported into another system,
> an attestation can be added to the ImportedVersion wrapper, so it is in
> theory possible to have an accumulating string of quality flags.
> However, it is at a coarse-grained level, and I am doubtful that it is
> useful or desirable to require systems to set such markers or to read
> them.
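[The accumulation Thomas describes can be pictured as each receiving system wrapping the version and optionally appending its own attestation. The classes below are a deliberate simplification for illustration, not the actual openEHR ImportedVersion/Attestation types:]

```python
from dataclasses import dataclass, field

@dataclass
class Attestation:
    system_id: str
    quality_flag: str   # e.g. "accepted", "unverified" (hypothetical values)

@dataclass
class ImportedVersion:
    """Simplified stand-in for an imported Composition version."""
    content: str
    attestations: list = field(default_factory=list)

def import_version(version, system_id, quality_flag):
    """Each hop may append its own attestation, so markers accumulate."""
    version.attestations.append(Attestation(system_id, quality_flag))
    return version

v = ImportedVersion(content="blood pressure reading")
import_version(v, "system_A", "accepted")
import_version(v, "system_B", "unverified")
# After two hops the version carries two (possibly conflicting) flags,
# which is exactly the coarse-grained accumulation problem noted above.
```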
>
> - thomas
>
>
> _______________________________________________
> openEHR-clinical mailing list
> openEHR-clinical at openehr.org
> http://lists.chime.ucl.ac.uk/mailman/listinfo/openehr-clinical