Re: openEHR-technical Digest, Vol 64, Issue 6

2017-07-06 Thread Thomas Beale

Hi Gerard,

On 07/06/2017 07:46, GF wrote:

Dear Silje,

Questions in a questionnaire are observations.
But what is it when a Scale makes use of existing data in the 
database and calculates an aggregate result?

(E.g. BMI)
Isn’t the latter an evaluation of existing observations by means of a 
rule?


Just on this question - a BMI might be computed, but it's still just a 
datum relating to an 'individual', as ontologists would say, so it's an 
OBSERVATION in openEHR terms. An EVALUATION is an opinion generated by 
comparing fact(s) about an individual to a knowledge base in order to 
classify the individual in some way, e.g. 'overweight'.


In the case of BMI the weight and length are real observable 
properties of a (human) body.
Question: Is the BMI an observable property? I think not. It is an 
aggregate, an evaluation.


not observable in the literal sense, but the view we take in openEHR is 
that an Observation is the apprehension of data relating to the 
individual, by means of examination and / or instruments. A BMI clearly 
falls under this category of information.


It can be the basis for an Evaluation if compared to some BMI normal 
ranges, to generate an Evaluation such as 'overweight', as above - this 
is an inference. A machine might do this.
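To make the distinction concrete, here is a minimal sketch in plain Python 
(these are not openEHR RM classes; the function names are made up, and the 
cut-offs are the usual WHO adult ranges). Computing the BMI yields a datum 
about the individual; classifying it against reference knowledge yields an 
opinion:

    def bmi(weight_kg: float, height_m: float) -> float:
        """Computed, but still just a datum about the individual (OBSERVATION)."""
        return weight_kg / (height_m ** 2)

    # The 'knowledge base': WHO adult cut-offs as (lower bound, upper bound, class).
    BMI_CLASSES = [
        (0.0, 18.5, "underweight"),
        (18.5, 25.0, "normal"),
        (25.0, 30.0, "overweight"),
        (30.0, float("inf"), "obese"),
    ]

    def classify_bmi(value: float) -> str:
        """The inference step (EVALUATION): compare the datum to the knowledge base."""
        for low, high, label in BMI_CLASSES:
            if low <= value < high:
                return label
        raise ValueError("BMI must be non-negative")

    print(classify_bmi(bmi(85.0, 1.75)))  # ~27.8 -> 'overweight'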


Aside:
Someone will probably bring up scores like Apgar, and say they are some 
sort of inference, since the constituent scores are based on 
statistical/clinical pictures of what is healthy or not (e.g. HR >= 100 
bpm etc). Philosophically speaking, this is true, and in theory they 
should be an EVALUATION. For practical reasons they are generally 
modelled as OBSERVATIONs, since they tend to act as a means of reporting 
physical examinations (in a quantitative way), and they get used as 
triage variables for determining which treatment path to follow.


In a more perfect world, scores might have their own ENTRY type, but I 
don't think the lack of it has done any harm to date.


- thomas




Re: openEHR-technical Digest, Vol 64, Issue 6

2017-06-07 Thread Karsten Hilbert
On Mon, Jun 05, 2017 at 05:54:49PM +0100, Thomas Beale wrote:

> With 'true' questionnaires, the questions can be nearly anything. For
> example, my local GP clinic has a first-time patient questionnaire
> containing the question 'have you ever had heart trouble?'. It's pretty
> clear that many different answers are possible for the same physical facts
> (in my case, occasional arrhythmia with ventricular ectopics whose onset is
> caused by stress, caffeine etc; do I answer 'yes'? - maybe, since I had this
> diagnosed by the NHS, or maybe 'no', if I think they are only talking about
> heart attacks etc).

And, in fact, the GP may not actually be that interested
in whether you really had any (clinical) *heart*
trouble. After all, quite a few people will list their
esophageal burns or intercostal nerve irritations (which
is NOT a problem!).

What a GP may be after is likely your internal model of your
state of health...

Large swathes of Primary Care have needs way different from
the more restricted areas of health management.

Regards,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: openEHR-technical Digest, Vol 64, Issue 6

2017-06-07 Thread GF
Dear Silje,

I understand that at the clinical (health care provider) level everyone 
wants/needs to do their own thing in their own unique way.

But:
Modelling Templates using recurring patterns (Archetypes) has a lot of 
value.
Doing the same things the same way has value.
In this way we can create an ordered way to query the data safely.
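For example, if BMI is always recorded via the same archetype, a single AQL 
query can find it across every template and system that uses it. A rough 
sketch (the archetype ID is the published CKM body mass index archetype; the 
node paths are cited from memory and should be checked against the actual 
archetype before use):

    # AQL query string, e.g. for sending to a server's AQL endpoint.
    AQL_BMI = """
    SELECT o/data[at0001]/events[at0002]/data[at0003]/items[at0004]/value/magnitude AS bmi
    FROM EHR e
      CONTAINS COMPOSITION c
        CONTAINS OBSERVATION o [openEHR-EHR-OBSERVATION.body_mass_index.v2]
    """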

What does ‘the way that best represents the data’ mean?
Is it the way clinicians see it presented?
Or is it the way that lets us query the data in the best, safest way?
The requirements for the two are not the same.

Questions in a questionnaire are observations.
But what is it when a Scale makes use of existing data in the database and 
calculates an aggregate result?
(E.g. BMI)
Isn’t the latter an evaluation of existing observations by means of a rule?
In the case of BMI the weight and length are real observable properties of a 
(human) body.
Question: Is the BMI an observable property? I think not. It is an aggregate, 
an evaluation.



Gerard   Freriks
+31 620347088
  gf...@luna.nl

Kattensingel  20
2801 CA Gouda
the Netherlands

> On 6 Jun 2017, at 19:34, Bakke, Silje Ljosland 
> <silje.ljosland.ba...@nasjonalikt.no> wrote:
> 
> I agree and disagree. ☺
>  
> An EHR needs to be able to cope with all kinds of data, “questionnaire” or 
> not. However I’m not so sure a modelling pattern that works for everything 
> that could be labelled a “questionnaire” is achievable, or even useful.
>  
> Modelling patterns are sometimes extremely useful, for instance for 
> facilitating modelling by non-clinicians or newbies, but sometimes they 
> aren’t very practical. One of the problems is that clinical information in 
> itself is messy, because healthcare information doesn’t follow nice semantic 
> rules. Clinical modelling must above all be faithful to the way clinicians 
> need to record and use data, not to a notion of semantically “pure” models.
>  
> Finding “sweet spots” by identifying patterns that are sensible, logical, and 
> above all else *work* for recording actual clinical information is often an 
> excruciatingly slow process of trial and error, exemplified by the substance 
> use summary EVALUATION and the physical examination CLUSTER modelling 
> patterns, both of which had already taken years of trial and error before I 
> got involved in them.
>  
> If we can find patterns across some kinds of “questionnaires”, like clinical 
> scores, great! However, since there isn’t a standardised pattern for paper 
> questionnaires, it’s not likely that it’s possible to make one for electronic 
> questionnaires. Outside the RM/AOM, a generic pattern archetype for every 
> questionnaire with variable levels of nesting, variable data points, etc 
> isn’t possible, nor would it in my opinion be useful. It would put all the 
> modelling load on the template modellers, which arguably would be more work 
> than modelling the same structures as made-for-purpose archetypes.
>  
> Some rules of thumb have developed over time though:
> 1.   Model the score/assessment/questionnaire in the way that best 
> represents the data
> 2.   Use the most commonly used name for identifying it
> 3.   Model them as OBSERVATION archetypes, unless they’re *clearly* 
> attributes of f.ex. diagnoses, in which case they should be CLUSTERs 
> (example: AO classification of fractures)
> 4.   Make sure to get references that support the chosen structure and 
> wording into the archetypes
>  
> In my opinion this pragmatic approach is likely to capture the data 
> correctly, while at the same time minimising overall modelling workload.
>  
> Regards,
> Silje
>  
> From: openEHR-technical [mailto:openehr-technical-boun...@lists.openehr.org 
> <mailto:openehr-technical-boun...@lists.openehr.org>] On Behalf Of GF
> Sent: Tuesday, June 6, 2017 3:58 PM
> To: For openEHR clinical discussions <openehr-clini...@lists.openehr.org 
> <mailto:openehr-clini...@lists.openehr.org>>
> Cc: Thomas Beale <openehr-technical@lists.openehr.org 
> <mailto:openehr-technical@lists.openehr.org>>
> Subject: Re: openEHR-technical Digest, Vol 64, Issue 6
>  
> I agree.
> ‘questionnaire’ is many things, but not at the same time.
>  
> In any case any EHR needs to be able to cope with all kinds.
> From ones with one or more qualitative results, such as the checklist, 
> to the validated Score where individual results are aggregated into one 
> total score.
>  
> It must be possible to create one pattern that can deal with all kinds.
>  
> 
> Gerard   Freriks
> +31 620347088
>   gf...@luna.nl <mailto:gf...@luna.nl>
>  
> Kattensingel  20
> 2801 CA Gouda
> the Netherlands
>  
> On 6 Jun 2017, at 14:46, Vebjørn Arntzen <varnt...@ous-hf.no 
> <mailto:varnt...@ous-hf.no>> wrote:

RE: openEHR-technical Digest, Vol 64, Issue 6

2017-06-06 Thread Bakke, Silje Ljosland
I agree and disagree. ☺

An EHR needs to be able to cope with all kinds of data, “questionnaire” or not. 
However I’m not so sure a modelling pattern that works for everything that 
could be labelled a “questionnaire” is achievable, or even useful.

Modelling patterns are sometimes extremely useful, for instance for 
facilitating modelling by non-clinicians or newbies, but sometimes they aren’t 
very practical. One of the problems is that clinical information in itself is 
messy, because healthcare information doesn’t follow nice semantic rules. 
Clinical modelling must above all be faithful to the way clinicians need to 
record and use data, not to a notion of semantically “pure” models.

Finding “sweet spots” by identifying patterns that are sensible, logical, and 
above all else *work* for recording actual clinical information is often an 
excruciatingly slow process of trial and error, exemplified by the substance 
use summary EVALUATION and the physical examination CLUSTER modelling 
patterns, both of which had already taken years of trial and error before I 
got involved in them.

If we can find patterns across some kinds of “questionnaires”, like clinical 
scores, great! However, since there isn’t a standardised pattern for paper 
questionnaires, it’s not likely that it’s possible to make one for electronic 
questionnaires. Outside the RM/AOM, a generic pattern archetype for every 
questionnaire with variable levels of nesting, variable data points, etc isn’t 
possible, nor would it in my opinion be useful. It would put all the modelling 
load on the template modellers, which arguably would be more work than 
modelling the same structures as made-for-purpose archetypes.

Some rules of thumb have developed over time though:

1.   Model the score/assessment/questionnaire in the way that best 
represents the data

2.   Use the most commonly used name for identifying it

3.   Model them as OBSERVATION archetypes, unless they’re *clearly* 
attributes of f.ex. diagnoses, in which case they should be CLUSTERs (example: 
AO classification of fractures)

4.   Make sure to get references that support the chosen structure and 
wording into the archetypes

In my opinion this pragmatic approach is likely to capture the data correctly, 
while at the same time minimising overall modelling workload.

Regards,
Silje

From: openEHR-technical [mailto:openehr-technical-boun...@lists.openehr.org] On 
Behalf Of GF
Sent: Tuesday, June 6, 2017 3:58 PM
To: For openEHR clinical discussions <openehr-clini...@lists.openehr.org>
Cc: Thomas Beale <openehr-technical@lists.openehr.org>
Subject: Re: openEHR-technical Digest, Vol 64, Issue 6

I agree.
‘questionnaire’ is many things, but not at the same time.

In any case any EHR needs to be able to cope with all kinds.
From ones with one or more qualitative results, such as the checklist, 
to the validated Score where individual results are aggregated into one 
total score.

It must be possible to create one pattern that can deal with all kinds.


Gerard   Freriks
+31 620347088
  gf...@luna.nl<mailto:gf...@luna.nl>

Kattensingel  20
2801 CA Gouda
the Netherlands

On 6 Jun 2017, at 14:46, Vebjørn Arntzen 
<varnt...@ous-hf.no<mailto:varnt...@ous-hf.no>> wrote:

Hi all

To me a "questionnaire" is a vague notion. There can be a lot of different 
"questionnaires" in health. From the GP's in Thomas's example to an Apgar score, 
to a clinical guideline and even a checklist. Those are all a set of "questions 
and answers", but the scope and use are totally different. In paper 
questionnaires we will find a mix of many, maybe all, of those, crammed into 
what the local practice has found to be useful (= "Frankenforms"). To try to 
put all of them into a generic questionnaire-archetype is of no use.

Examples:
The GP questionnaire referred to by Thomas is, in the quoted question about 
"ever had heart trouble", merely a help for the GP, and of little use for 
computation. But if it is supplemented by more specific questions, based on 
answers by the individual, then the final result can be "occasional arrhythmia 
with ventricular ectopics", which is relevant information for later use and 
should be put into a relevant archetype. So is it a "questionnaire" or a 
guideline for the consultation? Not relevant IMO, it's the content that's 
relevant.

Patients with haemophilia in Oslo University Hospital are offered a 
questionnaire online to register whether they've had incidents of bleeding, 
what caused it, if they needed medications and if so, the batch number of the 
medication. This is followed up by the staff both for reporting of used 
medication, and for the patient's next follow-up out-patient control or 
admission. Questionnaire or not? Not relevant – it's what the information is 
and what it is for, that is important. Find relevant archetypes for them, 
OBSERVATIONS or ADMIN-ENTRY for this, I guess.

Re: openEHR-technical Digest, Vol 64, Issue 6

2017-06-06 Thread GF
I agree.
‘questionnaire’ is many things, but not at the same time.

In any case any EHR needs to be able to cope with all kinds.
From ones with one or more qualitative results, such as the checklist, 
to the validated Score where individual results are aggregated into one 
total score.

It must be possible to create one pattern that can deal with all kinds.


Gerard   Freriks
+31 620347088
  gf...@luna.nl

Kattensingel  20
2801 CA Gouda
the Netherlands

> On 6 Jun 2017, at 14:46, Vebjørn Arntzen <varnt...@ous-hf.no> wrote:
> 
> Hi all
>  
> To me a "questionnaire" is a vague notion. There can be a lot of different 
> "questionnaires" in health. From the GP's in Thomas's example to an Apgar 
> score, to a clinical guideline and even a checklist. Those are all a set of 
> "questions and answers", but the scope and use are totally different. In paper 
> questionnaires we will find a mix of many, maybe all, of those, crammed into 
> what the local practice has found to be useful (= "Frankenforms"). To try to 
> put all of them into a generic questionnaire-archetype is of no use.
>  
> Examples:
> The GP questionnaire referred to by Thomas is, in the quoted question about 
> "ever had heart trouble", merely a help for the GP, and of little use for 
> computation. But if it is supplemented by more specific questions, based on 
> answers by the individual, then the final result can be "occasional 
> arrhythmia with ventricular ectopics", which is relevant information for 
> later use and should be put into a relevant archetype. So is it a 
> "questionnaire" or a guideline for the consultation? Not relevant IMO, it's 
> the content that's relevant.
>  
> Patients with haemophilia in Oslo University Hospital are offered a 
> questionnaire online to register whether they've had incidents of bleeding, 
> what caused it, if they needed medications and if so, the batch number of the 
> medication. This is followed up by the staff both for reporting of used 
> medication, and for the patient's next follow-up out-patient control or 
> admission. Questionnaire or not? Not relevant – it's what the information is 
> and what it is for, that is important. Find relevant archetypes for them, 
> OBSERVATIONS or ADMIN-ENTRY for this, I guess.
>  
> Even checklists are a set of questions and answers. "Have you remembered to 
> fill out the diagnosis?". "Is there a need to offer the patient help to deal 
> with the cancer diagnosis?". The main thing is to analyze what the resulting 
> answer represents, and how it will be used. Decision support? Clinically 
> relevant? Merely a reminder? Put them into a template, using appropriate 
> archetypes.
>  
>  
> Regards, Vebjørn
>  
> From: openEHR-clinical [mailto:openehr-clinical-boun...@lists.openehr.org 
> <mailto:openehr-clinical-boun...@lists.openehr.org>] On Behalf Of Thomas Beale
> Sent: 5 June 2017 18:55
> To: For openEHR technical discussions; For openEHR clinical discussions
> Subject: Re: openEHR-technical Digest, Vol 64, Issue 6
>  
>  
> 
> this has to be essentially correct, I think. If you think about it, scores 
> (at least well designed ones) are things whose 'questions' have only known 
> answers (think Apgar, GCS etc), each of which has objective criteria that can 
> be provided as training to any basically competent person. When a score / 
> scale is captured at the clinical point of care, any trained person should 
> convert the observed reality (baby's heart rate, accident victim's eye 
> movements etc) into the same value as any other such person. In theory, a 
> robot could be built to generate such scores, assuming the appropriate 
> sensors could be created.
> 
> With 'true' questionnaires, the questions can be nearly anything. For 
> example, my local GP clinic has a first-time patient questionnaire 
> containing the question 'have you ever had heart trouble?'. It's pretty clear 
> that many different answers are possible for the same physical facts (in my 
> case, occasional arrhythmia with ventricular ectopics whose onset is caused 
> by stress, caffeine etc; do I answer 'yes'? - maybe, since I had this 
> diagnosed by the NHS, or maybe 'no', if I think they are only talking about 
> heart attacks etc).
> 
> My understanding of questionnaires functionally is that they act as a rough 
> (self-)classification / triage instrument to save time and resources of 
> expensive professionals and/or tests.
> 
> There is some structural commonality among questionnaires, which is clearly 
> different from scores and scales. One of them is the simple need to represent 
> the text of the question within the model (i.e. archetype or template), 
> whereas this is not usually necessary in models of scores, since the coded 
> name of the item (e.g. Apgar 'heart rate') is understood by every clinician.

Re: openEHR-technical Digest, Vol 64, Issue 6

2017-06-05 Thread Thomas Beale


this has to be essentially correct, I think. If you think about it, 
scores (at least well designed ones) are things whose 'questions' have 
only known answers (think Apgar, GCS etc), each of which has objective 
criteria that can be provided as training to any basically competent 
person. When a score / scale is captured at the clinical point of care, any 
trained person should convert the observed reality (baby's heart rate, 
accident victim's eye movements etc) into the same value as any other 
such person. In theory, a robot could be built to generate such scores, 
assuming the appropriate sensors could be created.
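What such objective criteria look like when written down can be sketched in 
a few lines (plain Python; the thresholds are the standard Apgar heart rate 
item, and the other four items are omitted):

    def apgar_heart_rate(bpm: int) -> int:
        """Standard Apgar heart rate item: absent = 0, < 100 bpm = 1, >= 100 bpm = 2."""
        if bpm == 0:
            return 0
        return 1 if bpm < 100 else 2

Any trained observer - or, in principle, a sensor - applying this rule to 
the same baby at the same moment should produce the same value.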


With 'true' questionnaires, the questions can be nearly anything. For 
example, my local GP clinic has a first-time patient questionnaire 
containing the question 'have you ever had heart trouble?'. It's pretty 
clear that many different answers are possible for the same physical 
facts (in my case, occasional arrhythmia with ventricular ectopics whose 
onset is caused by stress, caffeine etc; do I answer 'yes'? - maybe, 
since I had this diagnosed by the NHS, or maybe 'no', if I think they 
are only talking about heart attacks etc).


My understanding of questionnaires functionally is that they act as a 
rough (self-)classification / triage instrument to save time and 
resources of expensive professionals and/or tests.


There is some structural commonality among questionnaires, which is 
clearly different from scores and scales. One of them is the simple need 
to represent the text of the question within the model (i.e. archetype 
or template), whereas this is not usually necessary in models of scores, 
since the coded name of the item (e.g. Apgar 'heart rate') is understood 
by every clinician.
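That structural difference can be sketched as two record types (hypothetical 
Python classes, not RM types): the questionnaire item has to carry its 
question text, because the text itself fixes the meaning of the answer, 
whereas the score item needs only its coded name.

    from dataclasses import dataclass

    @dataclass
    class QuestionnaireItem:
        question_text: str  # e.g. "Have you ever had heart trouble?" - must travel with the data
        answer: str

    @dataclass
    class ScoreItem:
        code: str           # e.g. a coded 'Apgar heart rate' - meaning fixed by the model
        value: int          # 0, 1 or 2; criteria defined once, not per form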


Whether there are different types of questionnaires semantically or 
otherwise, I don't know.


- thomas


On 05/06/2017 09:48, William Goossen wrote:

Hi Heather,

the key difference is that the assessment scales have a scientific 
validation, leading to clinimetric data, often for populations, but 
e.g. Apgar and Barthel are also reliable for individual follow-up 
measures.


a simple question and answer, even with some total score, does not 
usually have such an evidence base. I agree that in the data / semantic code 
representation in a detailed clinical model it is not different.




--
Thomas Beale
Principal, Ars Semantica 
Consultant, ABD Team, Intermountain Healthcare 

Management Board, Specifications Program Lead, openEHR Foundation 

Chartered IT Professional Fellow, BCS, British Computer Society 

Health IT blog  | Culture blog 


Re: openEHR-technical Digest, Vol 64, Issue 6

2017-06-05 Thread GF


Gerard   Freriks
+31 620347088
  gf...@luna.nl

Kattensingel  20
2801 CA Gouda
the Netherlands

> On 5 Jun 2017, at 10:48, William Goossen wrote:
> 
> Hi Heather,
> 
> the key difference is that the assessment scales have a scientific 
> validation, leading to clinimetric data, often for populations, but e.g. 
> Apgar and Barthel are also reliable for individual follow-up measures.

Correct.
But in essence it is a set of questions and answers plus a set of rules to 
aggregate the collected data.
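That decomposition can be sketched in a few lines (plain Python, hypothetical 
names; the items are the five standard Apgar items, each scored 0-2):

    def total_score(item_scores: dict) -> int:
        """The aggregation rule - for Apgar-like scales, a plain sum."""
        return sum(item_scores.values())

    apgar = {"heart_rate": 2, "respiration": 2, "muscle_tone": 1,
             "reflex_irritability": 2, "colour": 1}
    print(total_score(apgar))  # 8, on the 0-10 Apgar range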




> 
> a simple question and answer, even with some total score, does not usually 
> have such an evidence base. I agree that in the data / semantic code representation 
> in a detailed clinical model it is not different.

As you write yourself.

> 
> Hence, also, Grahame's nonsense comment on the value of semantic 
> interoperability of such things. It is for user groups of stakeholders to 
> determine the clinical and scientific merits of such instruments, not a 
> technical implementer.
> 

Yes. Local players must decide what they need.
But there is an interoperability issue as well.
It is very likely that for research many years from now we will need to be 
able to interpret the old data.
In other words we need interoperability over long periods of time, or better, 
interpretability over long periods of time.



> 
> vriendelijke groeten, with kind regards,
> 
> dr. William Goossen


Re: openEHR-technical Digest, Vol 64, Issue 6

2017-06-05 Thread William Goossen

Hi Heather,

the key difference is that the assessment scales have a scientific 
validation, leading to clinimetric data, often for populations, but e.g. 
Apgar and Barthel are also reliable for individual follow-up measures.


a simple question and answer, even with some total score, does not usually 
have such an evidence base. I agree that in the data / semantic code 
representation in a detailed clinical model it is not different.


Hence, also, Grahame's nonsense comment on the value of semantic 
interoperability of such things. It is for user groups of stakeholders 
to determine the clinical and scientific merits of such instruments, not 
a technical implementer.



vriendelijke groeten, with kind regards,

dr. William Goossen

directeur Results 4 Care B.V.
De Stinse 15
3823 VM Amersfoort
the Netherlands
phone: +31654614458
e-mail: wgoos...@results4care.nl
dcmhelpd...@results4care.eu
skype: williamgoossenmobiel
kamer van koophandel: 32133713
http://www.results4care.nl
http://www.results4care.eu
http://www.linkedin.com/company/711047
https://www.researchgate.net/profile/William_Goossen2
https://www.linkedin.com/in/williamgoossen

On 5-6-2017 09:55, openehr-technical-requ...@lists.openehr.org wrote:

Message: 1
Date: Mon, 5 Jun 2017 06:51:39
From: Heather Leslie
To: For openEHR technical discussions
Subject: RE: openEHR-technical Digest, Vol 64, Issue 4

Hi William,

I can concede that for those examples.

Honestly I'm not particularly fussed about the categorisation of the examples, 
and there are plenty of examples which use a question/answer format with a 
total score at the end, so it is not clear if we should call it a questionnaire 
or a scale.

The principles that I've laid out remain the same.

Regards

Heather

-Original Message-
From: openEHR-technical [mailto:openehr-technical-boun...@lists.openehr.org] On 
Behalf Of William Goossen
Sent: Monday, 5 June 2017 4:45 PM
To: openehr-technical@lists.openehr.org
Subject: Re: openEHR-technical Digest, Vol 64, Issue 4

The examples given, the Glasgow Coma Scale and the Barthel index, are 
definitely NOT questionnaires. These are assessment scales, and require quite 
a different approach from the (also extremely useful) questions and answers.

Vriendelijke groet, with kind regards,

Dr. William Goossen

Directeur Results 4 Care BV
+31654614458


On 5 Jun 2017 at 08:25, openehr-technical-requ...@lists.openehr.org 
wrote:



Message: 1
Date: Mon, 5 Jun 2017 15:59:26 +1000
From: Grahame Grieve
To: For openEHR technical discussions
Cc: For openEHR clinical discussions
Subject: Re: Questionnaires

hi Heather


> A generic question/answer pattern is next to useless - interoperability
> is really not helped

I think you should rather say "A generic question/answer pattern is
only useful for exchanging the questions and answers, and does not
allow re-use of data". This is not 'next to useless for
interoperability', just not fit for any wider purpose.

Grahame
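The point can be illustrated with two rows of hypothetical data: the only 
machine-readable handle a generic question/answer pattern offers is the 
question text itself.

    generic_answers = [
        {"question": "Have you ever had heart trouble?", "answer": "yes"},
        {"question": "Heart problems in the past?",      "answer": "no"},
    ]
    # Both rows 'mean' roughly the same thing, yet no query can safely
    # treat them as the same data point - fine for displaying and
    # exchanging the form, unfit for computing on the answers.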


On Mon, Jun 5,