Grahame Grieve wrote:

> Further, there is the comment:
>
>> Null values should not be mixed in with the value domains of true
>> data types, a practice which compromises the comprehensibility and
>> computability of data; they should be represented using a distinct data
>> interpretation marker associated with each data value.
>
> I believe that this has merely shunted the problem around, it hasn't
> actually solved anything. 

it solves one thing: it means that no value in a 'value' field is 
anything other than a data value. No special terms set to "unknown", 
no numerics set to -1 or -999999 or the like. This means that such 
fields are more interoperable, since there is no special processing 
required on them.
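To make the contrast concrete, here is a small sketch of my own (the names and the -999999 sentinel are illustrative, not from any openEHR specification): with sentinel values, every consumer of an integer field must know that type's magic numbers; with a separate marker, the value domain stays pure.

```python
from dataclasses import dataclass
from typing import Optional

# Sentinel approach: a magic number mixed into the value domain.
UNKNOWN_INT = -999999  # illustrative sentinel, not a real standard

def mean_sentinel(readings: list) -> float:
    # Every consumer must remember to filter the magic value,
    # and each type (Integer, Real, String...) needs its own.
    real = [r for r in readings if r != UNKNOWN_INT]
    return sum(real) / len(real)

# Marker approach: the value field holds only genuine integers;
# data quality travels in a separate field alongside it.
@dataclass
class Datum:
    value: int = 0                       # always a valid integer
    null_flavour: Optional[str] = None   # e.g. "unknown"; None means "known"

def mean_marked(readings: list) -> float:
    # One uniform rule for every data type: consult the marker.
    real = [d.value for d in readings if d.null_flavour is None]
    return sum(real) / len(real)
```

The processing rule in the marker version is identical regardless of the value's type, which is the interoperability point being made above.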

> The first issue for me is the idea that "attribute values always
> satisfy the type rules of the system". If I don't know the value,
> then I might not be able to provide information in the record to
> meet these requirements. For instance, say that the element is
> a simple integer, and I don't know what the value is. What can I
> place in the element so that it is a valid integer? I worry that
> this will lead to fabrication of data. A possible response to this
> is that you don't represent the element, and it has an implicit
> value of... null? 

the value field will probably wind up with a value of 0, since this is a 
typical default for integers, but it is of no consequence. The second 
field - what I would normally call the "data quality" field, but we call 
it the "null flavour" field to fit in with HL7 - is the one you read to 
find out how to interpret the 'value' field. If this second field says 
"unknown", you just ignore the 'value' field. If it says "known", then 
you use it. This is one of the shortcomings with HL7's null flavours, by 
the way - it should really include the idea of "known". We get around 
this by allowing the null flavour field to be Void itself, meaning "known".

This is a simple and general approach, and means that there is never any 
special processing.
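The read rule just described can be sketched as follows (a minimal illustration in my own naming, assuming None plays the role of Void for the null-flavour field):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Element:
    value: int = 0                      # typical default; meaningless when flagged
    null_flavour: Optional[str] = None  # None ("Void") means the value is known

def read(e: Element) -> Optional[int]:
    # The uniform rule: consult the null-flavour field first.
    if e.null_flavour is None:
        return e.value      # value is known - use it
    return None             # "unknown", "not asked", etc. - ignore the value field
```

The same two-step rule applies to every element, whatever the type of its value, so readers never need per-type knowledge of special values.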

> The second issue I have is that it is claimed that the introduction
> of a datum interpretation indicator is "safer" for applications.
> I don't see why it's safer - less chance of bugs - to have the option
> of overlooking the data quality indicator rather than being forced
> to deal with it since it's built into the type somehow. You can
> clearly fail to deal with it either way, but you don't so easily
> entirely miss the quality assessment 

you can, but with the data quality indicator approach you always know 
where to look to find out the data quality - there are no special values 
for Integer, Real, String, Coded_text etc to express the various 
possible data qualities. Data quality values have nothing to do with 
data values, and should not be mixed in with them. By the way, we did 
not invent this: it is used in control and monitoring systems, which 
have to carry a data quality marker for data items scanned from 
field devices. There, too, it is never mixed in with the data value itself.

There are on the contrary many examples of systems which contain 
software bugs due to misaligned understandings of special values within 
data fields.

> The third issue is the presence of such a datum interpretation
> indicator - will it actually be available when I want it? What
> will I do when I don't have the information but there is no spot
> for my system to say so? How will my system even know that a
> given attribute has a datum interpretation indicator associated
> with it? 

It is in every ELEMENT in the openEHR models.  

> This last question raises another question - where are such datum
> interpretation indicators introduced - in the reference model, or
> the archetypes?

Have a look at the ELEMENT class in the Data structures RM.
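For readers without the specification to hand, the shape of that class is roughly the following (a paraphrase from memory, with simplified Python types standing in for the RM's DATA_VALUE and DV_CODED_TEXT; consult the Data Structures RM for the normative definition):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class DvCodedText:
    # Simplified stand-in for the RM's coded-text type.
    value: str  # e.g. "unknown", "not asked", "masked"

@dataclass
class Element:
    name: str
    value: Optional[Any] = None                  # any data value type
    null_flavour: Optional[DvCodedText] = None   # Void => value is known

    def is_null(self) -> bool:
        # The datum interpretation indicator is part of every Element,
        # so it is always available when you want it.
        return self.null_flavour is not None
```

Because the indicator is defined in the reference model rather than in archetypes, every system built on the RM knows it is there for every element.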

> This is part of a bigger question, which I will
> return to later (and is my major concern) - can archetypes actually
> work in practice at all? (but don't try to answer this till I have
> added more to this question later) 

quick answer: yes - we know because we have built the software (an 
Eiffel-based system) to prove it; so has the DSTC (an XML-based system). 
So has the team at UCL (Java/ObjectStore), and so has a company in 
Sydney called Meridian, which makes a system called Obstet, which I have 
personally reviewed. There are at least 3 other distinct systems that I 
know of.

- thomas


-
If you have any questions about using this list,
please send a message to d.lloyd at openehr.org