>I agree with Karsten - there is a basic principle here (and in openEHR). 
>The physician must be able to write what they want. Now...if they want to 
>write a sentence with the words "possible Dengue Fever infection" then the 
>software may be able to code "Dengue Fever" using Snomed or some other 
>ontology; the result would be narrative with key coded terms. Peter 
>Elkin's group at Mayo have shown how they do the reverse (what Gerard 
>Freriks mentioned also) - a post hoc coding & structuring of the text.
>
>Philippe Ameline's Odyssee product on the other hand uses a structured 
>input system for recording endoscopy investigations, and the specialists 
>are happy with it. But - he has a nice, detailed lexicon of terms to draw 
>on, and it seems it has everything they want; when it doesn't, they just 
>write in a free text field. Even so, the principle I mention above seems 
>to be preserved.
>
>So - how structured the input should be depends, I suspect, more on the 
>type of medicine or specialty than on some overarching rule.

Hi,

We sold our first endoscopic report generator in 1987, so we now have some 
feedback ;o)

You have to understand that a report from a specialist is primarily 
intended to let other health professionals know something.
The quality factors here are:
- completeness: you should not forget to mention anything; a structured 
interface has proven to be a plus compared with free dictation
- process orientation: don't be elusive, "call a cat a cat" (a common 
reflex with free dictation is to be not as accurate as it should be)
- term repeatability: the same aspect should always be described in the 
same way; this is where reducing the term set is a plus, since otherwise 
the same aspect will be described in very different ways, even within the 
same team
- a defined corpus: if you know the corpus, you also know what could have 
been said but has not been said (not easy, however, but this concept is 
certainly in the Archetypes)
- ready now: the report should leave with the patient
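The "defined corpus" point above amounts to a closed-world reading of a report: with a fixed term set per exam type, anything not mentioned is knowably absent. A minimal sketch, using hypothetical findings terms (not an actual endoscopy lexicon):

```python
# Sketch of the "defined corpus" idea: given a closed term set for an exam
# type, the terms NOT used in a report are exactly what "has not been said".
# The corpus below is an illustrative assumption, not a real lexicon.
GASTROSCOPY_CORPUS = {
    "oesophagitis",
    "hiatal hernia",
    "gastric ulcer",
    "duodenal ulcer",
    "erythematous mucosa",
}

def unmentioned(corpus, findings):
    """Return, sorted, the corpus terms that could have been reported but were not."""
    return sorted(corpus - set(findings))

report = {"erythematous mucosa", "hiatal hernia"}
print(unmentioned(GASTROSCOPY_CORPUS, report))
# ['duodenal ulcer', 'gastric ulcer', 'oesophagitis']
```

With free dictation there is no such corpus, so silence about an item is ambiguous: it may be absent, or simply not dictated.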

It might be a French drawback, but very, very few free-text reports I have 
seen are OK from this point of view.

I am myself convinced that free-text analysis is a dead end, since you 
usually can't structure afterward what was not structured immediately.
To give an example: some time ago, I was installing my system in a 
hospital; an endoscopist had just finished an exam and had hand-written his 
result on a piece of paper. We took it as a learning model to produce the 
report with the generator.
Roughly, it said something like:
"20 cm from the teeth, there is a wide erythematous area"
Let's process it :
"20 cm from the teeth", with this kind of endoscop, we are in the 
oesophagus (not so easy, even for a human being... you must know the size 
of anatomical parts)
The "wide erythematous area" is our lesion.
All right, we have the lesion, and we know where it is; but 
unfortunately, in the report generator, you have to choose between 
"oesophagitis" and a simple "mucosa aspect". So the question is: "is it an 
oesophagitis?" (an interesting question anyway when it comes time to give 
some drugs to the patient - the process-oriented part of the report).

And we had to put the question to the guy who had seen the lesion, since 
even the network of five human brains in the room could not guess the 
answer.
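The localisation step in the anecdote can be sketched as a simple lookup table; the centimetre ranges below are illustrative assumptions, not clinical reference values. Note what the sketch also shows: the lookup answers "where", but whether the erythematous area is an oesophagitis is a judgement the free text never encoded, and no post hoc processing can recover it.

```python
# Sketch of mapping "distance from the teeth" to an anatomical region for an
# upper-GI endoscope. The boundaries are hypothetical examples only.
REGIONS = [  # (upper bound in cm from the incisors, region)
    (15, "pharynx"),
    (40, "oesophagus"),
    (60, "stomach"),
]

def locate(distance_cm):
    """Return the anatomical region a given scope depth falls in."""
    for upper_cm, region in REGIONS:
        if distance_cm <= upper_cm:
            return region
    return "beyond scope"

print(locate(20))  # 'oesophagus' - the lesion in the hand-written report
```

A structured input system asks for "oesophagitis" vs "mucosa aspect" at the moment the endoscopist still has the answer; free text defers the question until nobody can answer it.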

I perfectly understand Karsten's point about fuzziness. But once again, you 
must behave differently in a local/personal system and in a collective, 
process-oriented system. You can publish that there is still fuzziness 
somewhere; you can't publish fuzzy data.

Best regards,

Philippe 

-
If you have any questions about using this list,
please send a message to d.lloyd at openehr.org
