On 23 Jul 2007, at 4:51, Sam Heard wrote:
> Hi
>
> The question that is clear in my mind is why don't we do this
> now...label things with certainty? One difficulty that is
> immediately apparent is that avoiding risk usually means it is
> irrelevant. If someone says they had an ulcer, no matter how
> uncertain I am, will I use a risky therapy? If they say they got
> nauseated with morphine, will I change my practice at all
> regardless of certainty?
>
> The next issue is rogue measurements - a BP of 200/72 in a healthy
> person. Even if you have no faith in it, it is still something you
> need to take into account. When the person is diagnosed with a
> condition that explains it a few years later, it will look pretty
> silly if someone has decided it is not reliable and computers have
> been working on that basis since.
>
> In openEHR we have tried to give as much contextual information as
> we can - with the information provider being able to be stored for
> each entry - so even if a diagnosis of Crohn's disease gets into
> the record, if a clinician has insufficient evidence, it is
> possible to mark that it came from the patient or relative. This is
> an advance on most current systems. It is akin to a written record
> "He says he has Crohn's disease".
>
> We have modelled 'hard' quality factors in the laboratory archetype
> - it is probably worth looking at for that reason. It is in the
> data (optionally) to indicate if there are any quality issues that
> may affect interpretation. This may be the beginning of something
> that needs modelling more generally in the future but I would be
> interested to hear from people as to what are the candidate
> attributes for this quality issue.
The thing I'm trying to address is these 'hard' quality
factors, not the 'soft' ones. From my point of view we need to record
those 'hard' factors in order to be able to compute/compare them
against quality criteria and so create 'trust', but the question
is where? That's why I thought of a device archetype (AT). In that
situation, device-related quality data has to be recorded only once,
while it can easily be linked to 'every' measurement performed with
that device. For every data entry the 'state' of the device can be
'known' and taken into account.
If we can do the same thing with a 'cluster' as you suggested, or
otherwise, that's fine with me.
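To make the idea concrete, here is a minimal sketch (plain Python, not the openEHR API; all names such as `Device` and `calibration_valid` are illustrative) of recording device quality data once and linking each measurement to it by identifier:

```python
# Sketch: device-related 'hard' quality factors recorded once,
# referenced by every measurement taken with that device.
from dataclasses import dataclass
from datetime import date

@dataclass
class Device:
    device_id: str
    certified: bool            # e.g. CE marked
    calibration_expiry: date   # validity period of the last calibration
    max_error_pct: float       # measurement error bandwidth

    def calibration_valid(self, on: date) -> bool:
        # The device 'state' can be computed for any entry date.
        return on <= self.calibration_expiry

@dataclass
class Measurement:
    value: str
    device_id: str  # link back to the single device record

devices = {"cuff-01": Device("cuff-01", True, date(2008, 1, 1), 4.0)}
bp = Measurement("120/80 mmHg", "cuff-01")

# At interpretation time the device state is 'known' for this entry:
dev = devices[bp.device_id]
print(dev.calibration_valid(date(2007, 7, 23)))  # True
```

The point is only that the quality data lives in one place; every measurement carries just a reference, so the device state never has to be re-entered per observation.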
>
> We need to do this in a manner that computers can cope....and make
> sure we don't need any gnomes in the machine. I would like
> openEHR's moto to be (Getting the Gnomes out of the machine!)
I completely agree. My other point is: if a computer copes with this
quality issue, there are basically two possible outcomes: the data is
of sufficient quality, or it is of insufficient quality (of course
this will depend on the criteria one sets beforehand). My other
question is where we store the 'outcome' of such a quality
assessment. My suggestion is to add a 'data quality' field to the
protocol for that purpose, and furthermore to embed/add the criteria
in the ontology section, so that they are explicit. Since I'm aware
that these criteria could vary locally, they should only be added in
a local specialisation of this archetype/template (see previous reply).
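As a sketch of that two-outcome assessment (illustrative Python, not openEHR; the score table and the names `assess` and `data_quality_outcome` are my own assumptions, with the threshold standing in for the locally set criteria):

```python
# Sketch: compare a recorded quality rating against a locally set
# threshold; the result has exactly two possible outcomes.
QUALITY_SCORES = {"excellent": 4, "good": 3, "fair": 2, "poor": 1, "unknown": 0}

def assess(quality: str, local_threshold: str = "good") -> str:
    """Return 'sufficient' or 'insufficient' for the given rating."""
    if QUALITY_SCORES[quality] >= QUALITY_SCORES[local_threshold]:
        return "sufficient"
    return "insufficient"

# The outcome would be stored alongside the entry, e.g. in a
# 'data quality' element under the protocol section:
protocol_entry = {"data_quality_outcome": assess("fair")}
print(protocol_entry)  # {'data_quality_outcome': 'insufficient'}
```

Because the threshold is a parameter, a local specialisation can tighten or loosen it without touching the stored ratings themselves.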
The other day I tried to send another post which contained an
'example' AT in which this field was included. Unfortunately it
bounced since it was too large, so here I'll only provide the
relevant parts of that example AT.
Under the protocol section a data quality element (at1019) is added
which can be used to qualify the data entry.
Under ontology the criteria for assessing data quality are given
(at1020, at1024, at1025, at1026, at1027).
protocol matches {
    ITEM_TREE[at0011] matches {    -- List structure
        items cardinality matches {0..*; ordered} matches {
            ELEMENT[at0013] occurrences matches {0..1} matches {    -- Cuff size
                value matches {
                    DV_CODED_TEXT matches {
                        defining_code matches {
                            [local::
                            at0015,    -- Adult
                            at0016,    -- Wide adult
                            at0017,    -- Paediatric
                            at1008,    -- Thigh
                            at1009]    -- Neonatal
                        }
                    }
                }
            }
            ELEMENT[at1019] occurrences matches {0..1} matches {    -- Data quality
                value matches {
                    DV_CODED_TEXT matches {
                        defining_code matches {
                            [local::
                            at1020,    -- Excellent
                            at1024,    -- Good
                            at1025,    -- Fair
                            at1026,    -- Poor
                            at1027]    -- Unknown
                        }
                    }
                }
            }
        }
    }
}
["at1019"] = <
description = <"*Score for the quality
of the obtained data
during this session (en)">
text = <"*Data quality(en)">
>
["at1020"] = <
description = <"*data measured/obtained
by a qualified person,
with a certified instrument/device thats calibrated against a golden
standard, the measurement error is within a tight bandwidth (<5%) ,
the validity duration of the calibration isnt expired, maintained on
time and by qualified personal(en)">
text = <"*Excellent(en)">
>
["at1024"] = <
description = <"*data measured/obtained
by a qualified person
(this can also be a properly trained citizen), with a certified
instrument/device thats self calibrating, the measurement error is
within a tight bandwidth (<5%) , the machine isnt broke and
functioning well(en)">
text = <"*Good(en)">
>
["at1025"] = <
description = <"*data measured/obtained
by a qualified person
(this can also be a properly trained citizen), with a certified
instrument/device thats self calibrating, the measurement error isnt
within a tight bandwidth (CE marking alone allows measurement errors
>5%, the machine is not broke and functioning well(en)">
text = <"*Fair(en)">
>
["at1026"] = <
description = <"*in all the other
situations(en)">
text = <"*Poor(en)">
>
["at1027"] = <
description = <"*it is unknown how the
data was gathered(en)">
text = <"*Unknown(en)">
Cheers,
Stef
>
> Cheers, Sam
>