Tim Churches wrote:
> Jon David Patrick wrote:
>> To return to Richard's point and the need for further modelling. In working
>> with hospitals we have arrived at a four-class model of "physician" roles,
>> "physician" being a surrogate for the collection of people and
>> occupations that surround that function. These roles help us better define
>> the functions that an information system needs to serve and define the
>> services that should be provided. The roles and their processing needs
>> are:
>> Clinician - someone working at the point of care - needs are EMF retrieval
>> Researcher - doing analysis of data for improving care - needs are
>> aggregation of data across EMFs
>> Administrator - doing analysis for the daily operations of the
>> organisation - needs are aggregation of data across EMFs
>> Auditor - doing checking against defined standards - needs are mostly
>> aggregation, but also retrieval in special circumstances, plus certain
>> types of medical knowledge.
>>
>> This analysis now tells us that a sophisticated system for performing
>> "aggregation" is the primary need of a medical information system - not
>> "retrieval" as is currently the case.
> 
> Not sure that aggregation is the primary requirement, but as a public
> health person and epidemiologist I couldn't agree more that
> "aggregation" is vital and largely missing from existing clinical
> systems, and/or largely overlooked in their implementation and
> deployment. "Slap in some commercial reporting software designed for
> accountants and she'll be right" is the usual approach. It rarely is. I
> constantly see hospital clinical systems which cost hundreds of millions
> of dollars to deploy being installed with very little thought given to
> how to actually record what is wrong with patients in a consistent and
> analysable manner, let alone to tools (and expertise and/or training)
> for aggregate analysis.

Some more thoughts and perhaps grist for Jon's paper.

So many of the touted benefits of "e-health" are contingent on aggregate
"analytics": quality assurance, public health programmes, better
planning, health services research leading to organisational efficiency
gains, medical and biomedical research, even clinical decision support.

To meet these "aggregation" needs, several things are required:

a) Attention to the semantics of the individual clinical record - that
is, the recording of demographics, presenting problems, chronic
problems, signs, symptoms, investigation results, diagnoses, procedures,
therapeutics and other interventions in a consistent fashion which
facilitates analysis. Thus close attention to terminologies, code sets
and classifications, and how humans interface or interact with those
terminologies and code sets is *absolutely fundamental* to the provision
of useful aggregate analysis capabilities. The syntactic structure of
medical information - the format in which it is stored and transmitted -
is also important but a bit secondary to the way in which the
information is semantically encoded or otherwise represented in
computable form. But if you look at HL7 specs, or Standards Australia
health informatics standards, or even many NEHTA documents, you see
close and very detailed attention to syntax and formats but a very
laid-back and laissez-faire approach to the semantic coding of
information - as Ian Haywood points out, the HL7 and AS4700.x specs are
full of phrases like "...or use local code sets". Such an approach is
not very helpful when it comes to aggregate analysis. As an aside, NEHTA
must be congratulated for recognising the importance of a comprehensive
clinical terminology and securing a SNOMED CT license for all of Oz, but
I have grave doubts about their approach to SNOMED CT deployment and
maintenance, which is to try to do all that in-house within NEHTA. Nope,
that's detail work and should be outsourced to experts. NEHTA's role is
in conducting the orchestra, not trying to be a virtuoso on every
instrument in the orchestra - you're bound to fail if you try the latter.
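To make the point about semantic coding concrete, here is a minimal sketch (in Python) of why aggregation over free-text diagnosis entries fails where aggregation over a shared concept code succeeds. The records and codes below are invented purely for illustration - they are placeholder codes, not real SNOMED CT identifiers.

```python
from collections import Counter

# Hypothetical records: the same condition captured as free text (left)
# versus a single shared concept code (right). The codes are
# placeholders, not real SNOMED CT identifiers.
records = [
    {"free_text": "AMI",                   "code": "C001"},
    {"free_text": "acute MI",              "code": "C001"},
    {"free_text": "myocardial infarction", "code": "C001"},
    {"free_text": "Asthma",                "code": "C002"},
    {"free_text": "asthma (mild)",         "code": "C002"},
]

# Aggregating on free text fragments the counts across spelling and
# phrasing variants...
by_text = Counter(r["free_text"] for r in records)

# ...whereas aggregating on the concept code yields usable totals.
by_code = Counter(r["code"] for r in records)

print(by_text)  # five apparently "different" diagnoses
print(by_code)  # two conditions, correctly totalled
```

Five records, two actual conditions - but the free-text tally sees five distinct diagnoses, and no amount of clever reporting software downstream can fix that.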

b) Better analytical tools. The reporting and analysis facilities in
almost all extant clinical information systems - in hospitals, in
primary and community care, in path labs, just about everywhere - are
distinctly underwhelming and underdone. Add-on reporting and analysis
tools and packages generally don't integrate well, require far too much
effort to set up, and in many cases (statistical analysis tools are
particularly bad in this respect) are much too hard to use - often
downright user-vicious. Many are also expensive, although open-source
software has much to offer here, except for ease-of-use - but that's a
soluble problem.

c) There is a lack of training in aggregate analysis. Until they try to
do it, most people think that extracting useful information and insights
from aggregated data is easy. In fact, it is often rather trickier than
it first seems and, like many domains (clinical medicine included), it
has many pitfalls and pratfalls. You don't need a PhD in biostats to be
able to analyse data, but some grasp of the fundamentals does help, and
there is a need to integrate such training into CME and other ongoing
education far more than is currently done. Certainly undergraduate
medical students get a lot more exposure these days to epidemiology and
biostats than I ever did, but there are many older health professionals
who have picked up only a passing and sometimes faulty familiarity with
p-values, hazard functions, odds ratios and so on. I am not arguing that formal
qualifications are required in order to analyse data, but some exposure
to grounding principles is very desirable. Online training is the way to
go here, ideally tightly integrated with the next generation of
easier-to-use analytical tools for clinical and population health data.
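As a tiny example of the sort of grounding principle in question: the odds ratio mentioned above is just arithmetic on a 2x2 exposure-by-outcome table, and even the confidence interval is a few lines of code. The counts below are made up purely for illustration.

```python
import math

# Hypothetical 2x2 table (invented counts, for illustration only):
#                 disease   no disease
# exposed            a=20        b=80
# unexposed          c=10        d=90
a, b, c, d = 20, 80, 10, 90

# Odds ratio: cross-product ratio of the table.
odds_ratio = (a * d) / (b * c)  # (20*90)/(80*10) = 2.25

# Approximate 95% confidence interval on the log-odds scale
# (the usual normal-approximation, i.e. Woolf, method).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The arithmetic is trivial; the pitfalls are all in the interpretation - which is exactly why some exposure to the fundamentals matters more than the software.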

Tim C
_______________________________________________
Gpcg_talk mailing list
[email protected]
http://ozdocit.org/cgi-bin/mailman/listinfo/gpcg_talk