Editorials: Should clinical software be regulated?

http://www.mja.com.au/public/issues/184_12_190606/coi10287_fm.html

Enrico W Coiera and Johanna I Westbrook
MJA 2006; 184 (12): 600-601

New Australian evaluation guidelines will help inform the debate

It takes something like 10 years for a new compound to go from laboratory to clinical trial, and many more years before a drug’s safety and efficacy are proven. Why isn’t clinical software — which might check for drug–drug interactions and dosage errors and generate alerts and recommendations to influence prescriber behaviour — treated as rigorously?1 Today, anybody with programming skill could create a rudimentary electronic prescribing package and put it directly onto the desktop of a general practitioner without regulatory approval.

No doubt the stand-alone software in routine clinical use has undergone rigorous evaluation by its developers, but in most countries there is no specific regulation that requires this. Commercial vendors still sometimes sell prescribing systems with significant gaps in functionality.2 Some hospital prescribing systems are even sold devoid of the decision rules that will check for errors or guide prescribing. The expectation is that a hospital drug committee will have expertise in the development and maintenance of computational knowledge bases, an arcane and highly specialised skill set if there ever was one.
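To make concrete what such decision rules involve, the following is a minimal, purely illustrative sketch of a drug–drug interaction check of the kind a prescribing knowledge base encodes. The drug pairs and alert text are invented for illustration, not clinical guidance; a real knowledge base would be curated, versioned and maintained by specialists — precisely the expertise hospital drug committees are assumed to possess.

```python
# Illustrative sketch only: a toy prescribing decision rule.
# The interaction table is invented and must not be used clinically.

# Hypothetical knowledge base: unordered drug pairs -> alert text.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "Increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "Raised statin levels",
}

def check_interactions(current_meds, new_drug):
    """Return alerts raised when new_drug is added to current_meds."""
    alerts = []
    for med in current_meds:
        rule = INTERACTIONS.get(frozenset({med.lower(), new_drug.lower()}))
        if rule:
            alerts.append(f"{med} + {new_drug}: {rule}")
    return alerts

print(check_interactions(["warfarin"], "aspirin"))
```

Even this toy example hints at why maintenance is specialised work: every rule must be sourced, kept current with evidence, and tuned to avoid alert fatigue — none of which a typical hospital drug committee is resourced to do in-house.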

Evidence is mounting from systematic reviews that clinical information technologies deliver manifest benefit.3,4 However, case reports are appearing that indicate clinical software can sometimes cause harm.5 A new debate is building between those who demand that we rapidly introduce new information systems to improve the safety and quality of clinical practice and those whose view is that the evidence supporting their introduction is still wanting, and that, in some situations, there is a real possibility that they may do more harm than good.6

Much of the science on both sides of this debate is questionable. A widely reported article in 2005 identified 22 types of possible medication error risk associated with a clinical order-entry system.7 Clinical outcomes were not measured, and no attempt was made to explore whether these potential errors were the result of a badly designed system. Recently, Han et al reported that a hospital electronic prescribing system produced a statistically significant increase in mortality from about 3% to 7%.8 However, assigning the blame for this startling outcome solely to the software is problematic. Introduction of the software altered traditional work patterns and increased the complexity and time taken to prescribe. Yet the new system was implemented in less than a week — an extremely short time to introduce a complex new organisational process.

On the technology proponents’ side, systematic reviews of decision support systems often try to infer which features are beneficial by lumping together widely dissimilar systems used in very different contexts.4 However, local and sociocultural variables strongly influence the uptake and efficacy of such systems,9 and these are rarely controlled for or quantified in studies, making it hard to interpret this type of systematic review. Further, citing a review’s lack of evidence for the value of particular software features, when the original studies were never designed to test those features, adds little.

What should be done? The process guiding the development and testing of most medical treatments and biomedical instrumentation, including software embedded in or linked to clinical devices, is tightly regulated. In contrast, the development of stand-alone clinical software is not. In Australia, stand-alone decision-support computer programs, such as electronic prescribing systems, are not considered “therapeutic goods” and are not subject to regulation. Similarly, in the United States, software that relies on manual data input and that is not directly used in diagnosis or treatment is usually exempt from the premarket regulatory requirements of the Food and Drug Administration to demonstrate that the device is as safe and as effective as devices already on the market.10

Even if there were strict regulations for clinical software, governing either the process of system development or the knowledge and behaviours embedded in a system, there is no guarantee that software would be implemented or used safely. Information technology is only one component of health services.9 For the whole system to be safe, certification might have to include the skills of those using the software and the organisational processes within which the software is embedded. Consequently, the most appropriate model of governance over the safety and quality of clinical software is far from clear, and may involve elements of industry self-regulation, legislation and best practice guidance. These models are currently a matter of debate among organisations such as the International Organization for Standardization and the European Committee for Standardization. Locally, the National E-Health Transition Authority is developing basic technical standards for clinical software that should lead to more uniform and better engineered systems, and early work by the General Practice Computing Group examined the broader need for software accreditation. The United Kingdom’s National Programme for IT has moved further — establishing a safety team — and has embedded a safety management approach into its procurement processes.

The Australian Health Information Council recently published national guidelines for the evaluation of electronic clinical decision support systems, to promote evaluation using rigorous and validated methodologies.11 The guidelines recognise that it is difficult to propose a single evaluation methodology that meets the diverse needs of both the software and clinical communities. Different user groups have different evaluation tasks and objectives. Even the choice of evaluation method is sometimes unclear, given the complexities of health services and the limited opportunities to carry out rigorously controlled trials. The guidelines outline approaches to testing the clinical effectiveness of decision support systems, their integration into existing work practices, user acceptability, and technical evaluations of the software and knowledge bases.

Urgent debate is needed to move this agenda forward,12 and these guidelines should provide a platform to inform that debate. We can move quickly to develop appropriate models of governance for clinical software, or we can step back and let the courts decide, when legal cases of negligence occur. Some will argue that regulation inhibits innovation, but there are good examples of regulation driving technology innovation in other industries. The airline industry is often held up as a safety role model, but that industry was forced to change only after a string of catastrophic disasters. We can do much better by anticipating the potential risks of these technologies, rather than reacting to mishap. Over the next few years, despite people’s lives being saved or improved by these new systems, some hard lessons may be learned about their safe and effective use.