Hi folks,

Here are my thoughts. The accuracy of an observation depends on many
variables that may or may not be quantifiable. It can be affected by the
person, the machine, the methodology used, personal bias, or even
deliberate attempts at falsification. So at the end of the day, what do
we, as physicians, do? I can tell you what I do: I base my trust on
experience - my experience with the results that I get, over months or
years, from a lab, specialist or paramedic. And although nothing is
absolute, I find that results from certified labs do turn out to be more
consistent than those from smaller wayside labs. What I'm stating applies
to the more than 60% of the population that lives in countries where
certification of labs and regular monitoring is not the norm.
Therefore we could add a field for the lab's certification. At the end of
the day, most doctors do not go on lab results alone; they depend on
their own clinical assessment too, as I do.

Warm regards,

Dr D Lavanian MBBS, MD
Vice President - Software Division
AxSys Healthtech Ltd
Mobile: +91-9949902800

----- Original Message -----
From: Stef Verlinden <[email protected]>
Date: Tuesday, July 10, 2007 8:56 pm
Subject: Re: Data quality questions/ proposal
To: For openEHR clinical discussions <openehr-clinical at openehr.org>

> On 10 Jul 2007, at 14:06, Thomas Beale wrote:
>
> > We had many discussions over the years on this topic in openEHR, and
> > eventually decided to get rid of built-in flags to do with certainty.
> > There are ways to express certainty (see below), but we need to
> > understand the problem first. This is our current understanding.
> >
> > There is a concept of 'accuracy', which is the level of error in
> > instruments and other objective measuring mechanisms. This is
> > modelled in the Quantity data types; see the Data Types IM
> > (http://svn.openehr.org/specification/TAGS/Release-1.0.1/publishing/architecture/rm/data_types_im.pdf)
> > for a detailed discussion. Mathematically, accuracy defines a
> > confidence interval within which the value on a dial or readout lies;
> > mechanically it might correspond to some tolerances in manufacture
> > etc. The main thing is that 'accuracy' is a concept of objective
> > error. This concept seems to apply pretty much to quantitative
> > values, including ordinals, and is thus accommodated in the Quantity
> > package. There is also a magnitude_status attribute which allows
> > ~, <, >, <=, >= to be used, since some lab software generates such
> > markers.
> >
> > There is a concept of 'confidence' in a clinical judgement, e.g.
> > being 75% sure about a diagnosis (which is really a differential
> > diagnosis), risk of a problem occurring in a patient, etc.
> > This information can be archetyped in an Evaluation archetype, and
> > there is no standard for it - some people use percentages, some use
> > confidence levels (very low/low/medium/high/very high), and there
> > are other schemes. There have been studies on how terms like
> > high/medium/low correspond to percentages, which show that doctors
> > don't all equate them to the same numbers.
> >
> > Either of these can be adjusted in retrospect, using versioning
> > (built in to openEHR semantics).
> >
> > After some 5+ years of debating this issue in a very wide context,
> > including on these lists, I think the current approach is probably
> > about right, but some may not be surprised to know that some of us
> > were just as certain years ago of the need for a 'flag' as you are
> > now... but the more you try using it, the less it works. So we have
> > restricted such a flag + accuracy to Quantity types, and otherwise
> > confidence is archetyped.
> >
> > hope this helps.

> I don't want to be offensive, but it doesn't. This is a probably
> correct, but for me far too technical, answer to a practical question.
>
> As a clinician I need to be able to establish that a certain value
> sent to me is reliable. Therefore I need to know not only who measured
> this data, but also which device was used, whether the device was
> properly calibrated and maintained at the time of measurement, whether
> the performer was properly trained to do the measurement, and whether
> the measurement protocol was followed. I know blood pressure isn't the
> best example, so let's take a more critical one, like blood clotting
> time.
>
> All this I want to check against our 'internal' protocol, so that I
> know whether I can use this data or discard it.
>
> This is one of the major reasons (at least in the Netherlands) that
> medical specialists discard data already gathered by a GP (everybody
> has to go to a GP first in the Netherlands. The GP examines a patient
> and might perform some additional tests/measurements before he/she
> decides to refer to a specialist). The other major reason is that data
> can't be exchanged between the different systems. One of the reasons
> that an EHR is promoted is that data can be shared and re-used.
> Especially the latter is of great importance, since many decision
> makers firmly believe that's the way to control the expanding costs of
> the system. If we can share data thanks to openEHR, but the data
> quality remains questionable (and yes, this has everything to do with
> trust, but also with protecting one's income/position), people will
> use that argument to reject the data and keep working the
> old-fashioned way.
>
> What I'm looking for is a practical way to establish this. Any
> suggestions?
>
> Cheers,
>
> Stef

> > - thomas beale
> >
> > Ian McNicoll wrote:
> >> Hi Stef,
> >>
> >> Very interesting and, in principle, quite correct, but I am not
> >> sure how practicable this would be in the real world. There are so
> >> many variables that might determine whether a blood pressure is
> >> either accurate or appropriate, including the purpose for which the
> >> historical blood pressure reading is being monitored or reviewed.
> >> For instance, a series of post-op blood pressures may be accurately
> >> taken but be quite inappropriate to use for long-term hypertension
> >> monitoring.
> >>
> >> The other problem is that you can only make some of these measures
> >> of data quality in retrospect, e.g. you may have a BP device that
> >> has been calibrated correctly but later malfunctions, or you may
> >> have a seemingly competent patient who turns out to have been
> >> messing up the process.
> >> I can see some value in a simple flag (defaulting to false) to
> >> identify BP readings that should not be used for monitoring
> >> purposes, because they are known to have quality issues or were
> >> taken in inappropriate circumstances, e.g. post-op or during severe
> >> illness, but I think your data quality markers may be too complex
> >> to be workable.
> >>
> >> Regards,
> >>
> >> Ian
> >>
> >> McNicoll Medical Informatics
> >>
> >> From: Stef Verlinden [mailto:stef at vivici.nl]
> >> Sent: 10 July 2007 10:42
> >> To: For openEHR clinical discussions
> >> Subject: Data quality questions/ proposal
> >>
> >> One of the major requirements we have is what I call a 'data
> >> quality marker'. So the blood pressure recorded is 88/124, but what
> >> is the 'value/quality' of this measurement? IMHO any recorded value
> >> is useless unless the quality of the measurement can be established
> >> and taken into account when interpreting the data.
> >>
> >> In order to establish this data quality, we need to add some
> >> attributes to the observation archetypes used to record such
> >> measurements.
> >>
> >> So far as we can see now, we think these attributes are a data
> >> quality field and a device/instrument reference (which requires a
> >> device archetype), and this is what we would like to propose to the
> >> community.
> >>
> >> Since I don't know exactly how to do that, and we still have many
> >> unanswered questions, I'll describe what we're thinking about. It's
> >> very well possible that these things are already in place; in that
> >> case we aren't aware of it and would like to be pointed in the
> >> right direction.
> >>
> >> In our 'model', data quality can be described as: excellent, good,
> >> doubtful and insufficient.
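[Editorial note: the four quality levels named above form an ordered scale. A minimal sketch of how such a marker could be represented, assuming a simple ordered enumeration; the names are hypothetical illustrations, not openEHR types.]

```python
from enum import IntEnum

class DataQuality(IntEnum):
    """Ordered data-quality marker: higher value means higher quality.

    Hypothetical illustration of the four-level scale proposed above;
    not part of the openEHR specification.
    """
    INSUFFICIENT = 0
    DOUBTFUL = 1
    GOOD = 2
    EXCELLENT = 3

def usable_for(reading_quality: DataQuality, minimum: DataQuality) -> bool:
    """A consumer states the minimum quality it will accept."""
    return reading_quality >= minimum

# A specialist requiring at least GOOD data would discard a DOUBTFUL reading:
usable_for(DataQuality.DOUBTFUL, DataQuality.GOOD)   # False
usable_for(DataQuality.EXCELLENT, DataQuality.GOOD)  # True
```

An ordered enum (rather than four free-text strings) lets consuming systems express "at least this good" thresholds directly.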
> >> Here the first hurdle arises: one needs a protocol to define what
> >> is excellent, good, etc. These are probably 'local' criteria, so
> >> they can't be embedded in a general archetype.
> >>
> >> Our idea is to create a specialisation of the observation archetype
> >> in question, in which the local protocol is attached. For instance,
> >> this blood pressure archetype with the local Dutch data quality
> >> criteria would be
> >> openEHR-EHR-OBSERVATION.blood_pressure-data_qualityNL.v1.adl
> >>
> >> To give an example, these are the criteria for blood pressure we're
> >> thinking of:
> >>
> >> Excellent:
> >> data measured/obtained by a qualified healthcare provider, with a
> >> certified instrument/device that's calibrated against a 'golden
> >> standard'; the measurement error is within a tight bandwidth (<5%);
> >> the validity period of the calibration hasn't expired; the device
> >> is maintained on time and by qualified personnel.
> >> (This can't be met when self-measuring in the home situation.)
> >>
> >> Good:
> >> data measured/obtained by a qualified person (this can also be a
> >> properly trained patient/citizen), with a certified (CE-marked)
> >> instrument/device that's self-calibrating; the measurement error is
> >> within a tight bandwidth (i.e. the machine is approved by the
> >> European Society of Hypertension (ESH)); the machine isn't broken
> >> and is functioning well.
> >>
> >> Poor/Doubtful:
> >> data measured/obtained by a qualified person (this can also be a
> >> properly trained patient/citizen), with a certified instrument/
> >> device that's self-calibrating; the measurement error isn't within
> >> a tight bandwidth (CE marking alone allows measurement errors >7%);
> >> the machine isn't broken and is functioning well.
> >>
> >> Insufficient: all other situations.
> >>
> >> As a consequence we need to add at least one other attribute: a
> >> reference/link to the device used. In our opinion there should be a
> >> separate archetype for a device/instrument. In this archetype, not
> >> only the unique identifiers of the device are recorded, but also
> >> information about calibration, maintenance etc. So far as I
> >> understand/can see, such an archetype doesn't exist today.
> >>
> >> Our idea is to use the demographic archetype model for this. In
> >> fact there is already a demographic archetype subtype for 'agents'.
> >> So either we extend this subclass so it can be used for devices, or
> >> we create a new archetype class for devices/instruments based on
> >> this agent model.
> >>
> >> Another thing that is already established is the capability of a
> >> healthcare professional, i.e. is this person properly trained to
> >> operate a device/instrument? In that respect, I would like to add
> >> similar capabilities for non-healthcare professionals. In the above
> >> case, patients/citizens can also measure their own blood pressure.
> >> Before they can do that, they're trained and examined. Only then
> >> are they capable of producing 'good quality' data (provided that
> >> they meet the other criteria as well).
> >>
> >> Can anybody please comment on this?
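[Editorial note: the criteria above amount to a decision rule over a handful of yes/no facts plus an error bandwidth. A sketch of that rule in Python; the field names and function are hypothetical illustrations, and only the <5% and >7% thresholds are taken from the criteria above.]

```python
from dataclasses import dataclass

@dataclass
class MeasurementContext:
    """Hypothetical context record for one blood-pressure measurement."""
    operator_qualified: bool          # healthcare provider OR trained patient
    operator_is_professional: bool    # qualified healthcare provider
    device_certified: bool            # e.g. CE marked
    calibrated_against_gold_standard: bool
    calibration_valid: bool           # validity period not expired, maintained
    device_functioning: bool          # machine isn't broken
    error_bandwidth_pct: float        # worst-case measurement error, in %

def classify(ctx: MeasurementContext) -> str:
    """Apply the proposed criteria (illustrative only, not an openEHR rule)."""
    # Every tier above 'insufficient' requires a qualified operator and a
    # certified, working device.
    if not (ctx.operator_qualified and ctx.device_certified
            and ctx.device_functioning):
        return "insufficient"
    # 'Excellent' additionally requires a professional and a device with a
    # valid gold-standard calibration and error below 5%.
    if (ctx.operator_is_professional
            and ctx.calibrated_against_gold_standard
            and ctx.calibration_valid
            and ctx.error_bandwidth_pct < 5.0):
        return "excellent"
    # 'Good' requires the error to stay within a tight bandwidth; 7% is used
    # here because CE marking alone allows errors above that.
    if ctx.error_bandwidth_pct < 7.0:
        return "good"
    return "doubtful"
```

Writing the rule out like this also makes the 'local criteria' point concrete: another country would swap in a different `classify`, while the tier names and the device reference stay shared.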
> >> As stated before, it would really be of great help if we could
> >> organise some sort of 'archetype boot camp', to create an expanding
> >> community of clinicians who know how to create archetypes and to
> >> harmonize the 'wishes and ideas' that will come up as soon as more
> >> people start creating and using archetypes.
> >>
> >> Cheers,
> >>
> >> Stef
> >>
> >> _______________________________________________
> >> openEHR-clinical mailing list
> >> openEHR-clinical at openehr.org
> >> http://lists.chime.ucl.ac.uk/mailman/listinfo/openehr-clinical

> > --
> > Thomas Beale
> > Chief Technology Officer, Ocean Informatics
> > Chair Architectural Review Board, openEHR Foundation
> > Honorary Research Fellow, University College London
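[Editorial note: Thomas's point that, in the openEHR Quantity data types, accuracy defines a confidence interval around the recorded magnitude, alongside a magnitude_status marker, can be sketched as follows. This is a simplified stand-in, not the actual DV_QUANTITY class from the Data Types IM.]

```python
from dataclasses import dataclass

@dataclass
class SimpleQuantity:
    """Simplified stand-in for an openEHR Quantity (illustration only)."""
    magnitude: float
    units: str
    accuracy: float = 0.0             # 0.0 means accuracy unknown
    accuracy_is_percent: bool = False # accuracy given as % of magnitude?
    magnitude_status: str = "="       # one of =, <, >, <=, >=, ~

    def interval(self) -> tuple[float, float]:
        """Confidence interval implied by the accuracy, per Thomas's description."""
        delta = (self.magnitude * self.accuracy / 100.0
                 if self.accuracy_is_percent else self.accuracy)
        return (self.magnitude - delta, self.magnitude + delta)

# A systolic BP of 124 mmHg with +/-5% accuracy:
bp = SimpleQuantity(124.0, "mm[Hg]", accuracy=5.0, accuracy_is_percent=True)
bp.interval()  # approximately (117.8, 130.2)
```

This is the objective-error half of the thread's problem; the subjective half (confidence in a judgement, and Stef's quality tiers) is, as Thomas says, left to archetypes rather than built into the data types.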

