This has to be essentially correct, I think. If you think about it, scores (at least well-designed ones) are things whose 'questions' have only known answers (think Apgar, GCS etc), each of which has objective criteria that can be provided as training to any basically competent person. When a score / scale is captured at the clinical point of care, any trained person should convert the observed reality (a baby's heart rate, an accident victim's eye movements etc) into the same value as any other such person. In theory, a robot could be built to generate such scores, assuming the appropriate sensors could be created.
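To make the 'robot' point concrete, here is a minimal sketch in Python (the function name and shape are mine, purely illustrative) of how one Apgar component reduces to a deterministic mapping from observation to value:

    def apgar_heart_rate_score(beats_per_minute: int) -> int:
        # Apgar heart-rate criteria: absent = 0, below 100 bpm = 1,
        # 100 bpm or more = 2. Any trained observer (or sensor)
        # applying these cut-offs must produce the same value.
        if beats_per_minute <= 0:
            return 0  # no detectable heartbeat
        return 1 if beats_per_minute < 100 else 2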

With 'true' questionnaires, the questions can be nearly anything. For example, my local GP clinic has a first-time patient questionnaire containing the question 'have you ever had heart trouble?'. It's pretty clear that many different answers are possible for the same physical facts (in my case, occasional arrhythmia with ventricular ectopics whose onset is caused by stress, caffeine etc; do I answer 'yes', since I had this diagnosed by the NHS, or 'no', if I think they are only asking about heart attacks etc?).

My understanding of how questionnaires function is that they act as a rough (self-)classification / triage instrument, saving the time and resources of expensive professionals and/or tests.

There are some structural commonalities among questionnaires that clearly differ from scores and scales. One is the simple need to represent the text of the question within the model (i.e. archetype or template), whereas this is not usually necessary in models of scores, since the coded name of the item (e.g. Apgar 'heart rate') is understood by every clinician.
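As a rough sketch of that structural difference (Python dataclasses, purely illustrative; these are not openEHR reference model classes):

    from dataclasses import dataclass

    @dataclass
    class ScoreItem:
        # the coded name alone is enough; 'Apgar heart rate' means
        # the same thing to every trained clinician
        code: str            # e.g. "apgar_heart_rate"
        value: int

    @dataclass
    class QuestionnaireItem:
        # the question text must travel with the model, since the
        # answer only makes sense relative to the exact wording asked
        code: str
        question_text: str   # e.g. "Have you ever had heart trouble?"
        answer: str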

Whether there are different types of questionnaires semantically or otherwise, I don't know.

- thomas


On 05/06/2017 09:48, William Goossen wrote:
Hi Heather,

the key difference is that the assessment scales have scientific validation, leading to clinimetric data, often for populations, but e.g. Apgar and Barthel are also reliable for individual follow-up measures.

a simple question and answer, even with some total score, does not usually have such an evidence base. I agree that in the data / semantic code representation in a detailed clinical model it is no different.


--
Thomas Beale
Principal, Ars Semantica <http://www.arssemantica.com>
Consultant, ABD Team, Intermountain Healthcare <https://intermountainhealthcare.org/> Management Board, Specifications Program Lead, openEHR Foundation <http://www.openehr.org> Chartered IT Professional Fellow, BCS, British Computer Society <http://www.bcs.org/category/6044> Health IT blog <http://wolandscat.net/> | Culture blog <http://wolandsothercat.net/>
