Hi Thomas,
Specific points:
1. My pathology software supports <= and >= in this context, but I have not
come across an automated blood analyser interface that supports or requires
them, and in the couple of databases I looked at (approx. 15 x 10^6 numeric
values over 8 years) no user has used these values.
So probably good for completeness, but no apparent use in the real world!

2. The "Inaccurate" flag generally means that for some reason this value
should be treated with caution or is unreliable.
It may in fact be perfectly accurate (as in the haemolysed blood K+ example)
but not actually a measure of the defined analyte - i.e. rather than a serum
K+ value, what was measured was serum K+ contaminated with intracellular K+.
Similar issues occur with cold agglutinins (inaccurate values if the test is
performed on a specimen not kept at body temperature) and with serum glucose
measured on a specimen which does not contain a metabolic inhibitor
("fluoride tube") and which has been kept at room temperature for too long.
At other times the value may actually be inaccurate, e.g. due to failure to
calibrate an analyser correctly.

The fact that the value is unreliable often only becomes apparent after the
event, e.g. the doctor rings up to query a high-normal K+ on a patient whose
K+ value was expected to be low, and a visual/microscopic examination of the
specimen reveals haemolysis.
In the case of a badly calibrated analyser, routine statistical analysis of
values may demonstrate that there has been unacceptable variability in
results, or that the average result was significantly higher than expected,
or (as happened very memorably in my practice in the Emergency department)
that the values over a period of time fail to correlate with the clinical
condition of the patients.

So it is absolutely necessary to be able to record (and keep) the value, but
also to be able to flag it, either at the time or some time later, as
unreliable/inaccurate.
It is probably not worthwhile (or in some cases even possible) to decide
whether an erroneous result is inaccurate or unreliable.

3. Rules need to be provided as to how such values should be treated when
comparing with normal ranges. For example, if the normal range is 0-6 and the
value given is <5, then this is normal. However, if the normal range is 0-3,
is this a normal value or not?
This can be dealt with by "flavours of null" on a "normality flag".
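To make this concrete, here is a minimal sketch (in Python; the function name and the three-valued result are my own illustration, not anything defined in openEHR) of how a one-sided value such as "<5" might be compared with a normal range, with "indeterminate" playing the role of a flavour of null on the normality flag:

```python
def normality(magnitude, status, low, high):
    """Classify a reported value against a normal range [low, high].

    status is '=' for a point value, '<' or '>' for one-sided values.
    Returns 'normal', 'abnormal', or 'indeterminate' (the "flavour of
    null" case described above). Illustrative sketch only.
    """
    if status == "=":
        return "normal" if low <= magnitude <= high else "abnormal"
    if status == "<":
        # True value lies somewhere in [0, magnitude), assuming the
        # analyte concentration cannot be negative.
        if magnitude <= low:
            return "abnormal"        # certainly below the range
        if low == 0 and magnitude <= high:
            return "normal"          # whole possible interval is in range
        return "indeterminate"
    if status == ">":
        # True value lies somewhere above magnitude.
        if magnitude >= high:
            return "abnormal"        # certainly above the range
        return "indeterminate"
    raise ValueError(f"unknown status {status!r}")

print(normality(5, "<", 0, 6))  # normal        (Vince's 0-6 example)
print(normality(5, "<", 0, 3))  # indeterminate (Vince's 0-3 example)
```

This reproduces the two cases above: "<5" against 0-6 is definitely normal, while "<5" against 0-3 cannot be decided either way.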

4. Applications such as graphing and statistics packages need to be aware of
such values and treat them appropriately. Some general guidance/rules around
this for developers/users may be appropriate.

Regards
Vince

  ----- Original Message ----- 
  From: Thomas Beale 
  To: openehr-technical at openehr.org 
  Sent: Thursday, March 02, 2006 12:10 AM
  Subject: Re: Pathology numeric values not supported in DV_Quantity



  Just going through the replies we have had on this one.....

    a. Gerard's point about <5 etc. being an exception is not quite right -
it's very common; it's usually to do with sensitivity of instruments (i.e.
accuracy), but there are also analytes which are reported as just being over
a threshold, since any number larger than X is fine (e.g. glomerulin, Sam
tells me).

    b. This is not an indication that the data type is really a DV_INTERVAL
or DV_QUANTITY_RANGE - it is clearly not. When we see "HCO3: <5 mmol/L" we
are not reporting an interval of 0-5 mmol/L, we are reporting a point value
somewhere in 0-5, but we don't quite know where.

    c. Tom Tuddenham's point is also correct. In openEHR, we actually do have
a data quality marker (I used to work in SCADA as well, and lived with this
kind of stuff for years!). It is called null_flavour and is defined on the
ELEMENT class, next to the value attribute, which is the one that holds the
Quantity we are talking about (or some other kind of data value in other
circumstances). Here we have a more fine-grained occurrence of the same
problem, for slightly different reasons: the instrument or measuring method
and data communications are working as they should; it's just that either the
value is too low or too high for quantification by the instrument, or else
the instrument doesn't bother reporting it above or below a certain
threshold, since it is known that any value above/below is healthy.
Nevertheless, we have to treat it in a similar way - probably with a flag
that indicates the 'status' of the value.

    d. In practical terms we have to deal with the fact that quantities in
the form of single-sided intervals with <, >, <=, >= can be mixed in with
normal point-value quantities, or replace them, on a per-test-result basis.

    e. We also have to have a solution that is easily comprehensible in the
model and for software developers. Allowing INTERVALs to magically replace
QUANTITY, as is done in HL7, is not the way to do it, since there is no clean
basis in the modelling for this (i.e. it's not normally possible in OO
languages - you have to do something quirky to make it happen); in any case,
as pointed out above, DV_INTERVAL is not semantically correct in these cases
anyway.
  My analysis is that we need to slightly extend DV_QUANTIFIED (supertype of 
DV_COUNT and DV_QUANTITY, as well as all the date/time types), in the way that 
Vince has said (probably Vince worked this solution out years ago;-)...so that 
the semantics are:

    a. a magnitude

    b. NEW ATTRIBUTE: a status flag - with the following possible values:
      a. > : greater than
      b. < : less than
      c. >= : greater than or equal to (Vince, do we really need this and the
next one - do you get real values where it is reported like this?)
      d. <= : less than or equal to
      e. = : exact point value (i.e. the default situation)
      f. ~ : approximately equal to, i.e. like '=' but with some unknown
error
      g. ? : inaccurate... what does this mean? If it is due to haemolysed
blood, then is it "inaccurate" or is it really just plain "wrong"
("incorrect")?

    c. accuracy

    d. ...other attributes, depending on subtype
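  As a rough sketch of how this proposal might look in code (Python used purely for illustration; the class and attribute names here are my assumptions, not the actual openEHR definitions), the "HCO3: <5 mmol/L" case becomes an ordinary point-valued object carrying a one-sided status, rather than an interval:

```python
from dataclasses import dataclass
from typing import Optional

# Proposed status flag values, per the list above.
VALID_STATUS = {"=", "<", ">", "<=", ">=", "~", "?"}

@dataclass
class DvQuantified:
    """Sketch of DV_QUANTIFIED with the proposed status flag.
    Names are illustrative, not the openEHR specification."""
    magnitude: float
    magnitude_status: str = "="       # proposed new attribute
    accuracy: Optional[float] = None  # existing: +/- value, % or absolute

    def __post_init__(self):
        if self.magnitude_status not in VALID_STATUS:
            raise ValueError(f"invalid status: {self.magnitude_status!r}")

    def __str__(self):
        prefix = "" if self.magnitude_status == "=" else self.magnitude_status
        return f"{prefix}{self.magnitude}"

# The "HCO3: <5 mmol/L" case: a point value with a '<' status.
hco3 = DvQuantified(magnitude=5.0, magnitude_status="<")
print(hco3)  # <5.0
```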

  Adding a flag will be easy in modelling and software terms. What we have to
do is carefully design the values; Vince has provided what is probably just
about right, but I would like to be sure - see notes above on the list. Also,
remember the openEHR DV_QUANTIFIED class already has accuracy as a Real - it
can be a % or an absolute value, so that any DV_QUANTIFIED can be created
with a +/- 5% or whatever. Given this, do we need the '~' flag (maybe we do:
maybe there is no accuracy data available, and all we can get from a legacy
feed is '~')? And isn't the "inaccurate" flag (as Vince named it) about
something else? As Vince said, doing this means more careful data analysis to
determine whether a value is normal or not, and how it should be graphed. Do
we need to take this into account in the model in some way? There is already
another CR to adjust how normal_range is modelled, and we have an is_normal
function defined on DV_ORDERED (the ancestor of all the Quantity types in
openEHR).
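  Since accuracy can be either a % or an absolute value, here is a tiny sketch (illustrative only, not the openEHR API; the function name is mine) of turning an accuracy figure into the explicit bounds it implies:

```python
def accuracy_bounds(magnitude, accuracy, is_percent):
    """Return the (low, high) bounds implied by a +/- accuracy that is
    either a percentage of the magnitude or an absolute value.
    Illustrative sketch only - not the openEHR API."""
    delta = magnitude * accuracy / 100.0 if is_percent else accuracy
    return (magnitude - delta, magnitude + delta)

print(accuracy_bounds(140.0, 5.0, True))   # (133.0, 147.0): +/- 5%
print(accuracy_bounds(140.0, 2.0, False))  # (138.0, 142.0): +/- 2 absolute
```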

  If we can get a bit more discussion on these details, I think we can fairly
quickly state what changes are needed and write a CR for them.

  - thomas






  Sam Heard wrote: 
    Hi everyone,

    We want to report an issue that has arisen in data processing in Australia.

    The issue is the somewhat random ability of systems to report a >xx or
<yy value where a quantity is expected - there are still units and still a
normal range. This is common with TSH and GFR, but can turn up in unexpected
instances - e.g. we had a baby with an HCO3 of <5 mmol/L. This can be dealt
with at present by substituting an interval, but it is a bit weird, as there
is still a normal range - it kind of works, as there is only a lower or upper
value of the interval, and so this single quantity can carry the normal
range.

    The point is that it is really a point measurement that is outside the 
range of the measuring device. Also, it means that we will have to have 
archetypes that allow multiple datatypes for all quantities that could 
conceivably be measured in this way.

    The alternative is to consider a DV_QUANTITY_RANGE that inherits from 
DV_QUANTITY - it still has only one value - but now it has the ability to set 
this as the upper or lower value - and also whether this number is included or 
not.

    The advantage is that there would still be a number to graph, and this
data type could always be substituted for a DV_QUANTITY (i.e. without
archetyping).

    I wonder what others think.

    Cheers, Sam

    -- 

    Dr. Sam Heard
    MBBS, FRACGP, MRCGP, DRCOG, FACHI
    CEO and Clinical Director
    Ocean Informatics Pty. Ltd.
    Adjunct Professor, Health Informatics, Central Queensland University
    Senior Visiting Research Fellow, CHIME, University College London
    Chair, Standards Australia, EHR Working Group (IT14-9-2)
    Ph: +61 (0)4 1783 8808
    Fx: +61 (0)8 8948 0215







-- 
___________________________________________________________________________________
CTO Ocean Informatics (http://www.OceanInformatics.biz)
Research Fellow, University College London (http://www.chime.ucl.ac.uk)
Chair Architectural Review Board, openEHR (http://www.openEHR.org)
