Hi Alan,
I work in a doctor's surgery where AI is very much seen as a potential
solution to the problem of overwhelming workload. The doctors are always
struggling to find time to process all the incoming results (blood
tests, X-ray results, urine results, DEXA scan results, etc.). If a result
comes back showing, say, that the patient has a low sodium level, this
could possibly be caused or exacerbated by one of the medications the
patient is on; or maybe by a course of treatment the patient is
having at the hospital; or maybe by a medical condition the patient has
which we already know about. The doctor has to go rooting around in the
patient's notes to find out if any of this applies, and it can be a very
time-consuming process. At the moment our computer system tends to flash
up warnings ("Low sodium can be caused by loop diuretics!") without
telling you whether this warning applies to this particular patient, and
if so which of their medications it applies to, and what the best
medical advice would be (stop it, reduce the dose, change to a different
diuretic...). But the potential is for AI to do all of those things,
probably within the next five years or so, saving an enormous amount of
human time and cutting out a good deal of human error.
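Just to make concrete what I mean by "checking whether the warning
applies to this patient": something like the following sketch, in
Python, of cross-referencing a flagged result against the patient's own
medication list and known conditions. Everything here (the KNOWN_CAUSES
table, the flag_result function, the example patient) is invented for
illustration - it's not how our actual system or any real product works:

    # Hypothetical sketch: cross-reference an abnormal finding against
    # what we already know about this particular patient.
    KNOWN_CAUSES = {
        # abnormal finding -> drug classes / conditions that can cause it
        "low_sodium": {
            "drug_classes": {"loop diuretic", "thiazide diuretic", "SSRI"},
            "conditions": {"SIADH", "heart failure", "adrenal insufficiency"},
        },
    }

    def flag_result(finding, patient):
        """Return patient-specific reasons a flagged result might apply,
        rather than a generic one-size-fits-all warning."""
        causes = KNOWN_CAUSES.get(finding)
        if causes is None:
            return []
        reasons = []
        for med in patient["medications"]:
            if med["drug_class"] in causes["drug_classes"]:
                reasons.append(f"{med['name']} ({med['drug_class']}) may be contributing")
        for condition in patient["conditions"]:
            if condition in causes["conditions"]:
                reasons.append(f"known condition '{condition}' may be contributing")
        return reasons

    # Example: a patient on furosemide, with known heart failure,
    # whose blood test comes back showing low sodium.
    patient = {
        "medications": [{"name": "furosemide", "drug_class": "loop diuretic"}],
        "conditions": ["heart failure"],
    }
    for reason in flag_result("low_sodium", patient):
        print(reason)

Even a hand-maintained lookup table like that would be an improvement on
a blanket warning; the promise of AI is that it would do the
cross-referencing itself and suggest the action (stop, reduce the dose,
switch) on top.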
In the hospital environment, AI has already been shown to be more
efficient than humans at processing things like cytology results and
mammogram results. It makes fewer errors, it processes the information
faster, and of course it can keep doing it round the clock, seven days a
week, without needing a break.
It's also the case that when doctors are dealing with patients face to
face it's becoming more and more difficult to remember and apply the
latest guidelines - in the UK we're supposed to operate in accordance
with NICE guidelines, NICE standing for the National Institute for
Health and Care Excellence. Even something comparatively simple, like prescribing the
right medication for a diabetic, or prescribing the right combination of
inhalers for somebody with Chronic Obstructive Pulmonary Disease, has
got much more complicated in the last couple of years with the
introduction of new medications such as Semaglutide/Liraglutide (for
diabetics) and LABA/LAMA inhalers for COPD. So you have to go and look
up the guidance, or try to find a nice convenient summary of it, because
it's really difficult to remember and it changes all the time. Again, AI
could be a big help if it were properly implemented.
This isn't to say that doctors aren't worried about their status and
expertise being eroded. They are. But I think most of them view an
ever-increasing implementation of AI in healthcare settings as
inevitable, and as long as it's implemented in a sympathetic and
user-friendly way I think it will generally be welcomed.
All of this raises bigger questions, of course - about humans and human
interaction being sidelined for the sake of labour-saving and
cost-saving AI implementation. I think we do have to be very careful
about how we introduce this stuff, but I also think that we probably
won't be. If the Department of Health can see ways to increase
efficiency and reduce the wages bill in the NHS by bringing in more AI
then that's what it will do, and if that means a lot of lab technicians
and doctors losing their jobs and a lot of elderly patients finding it
increasingly difficult to get face-to-face appointments with an actual
real life doctor in an actual surgery, too bad.
On this and the even bigger questions about whether AI itself is going
to start making decisions which sideline or eliminate humans in the
interests of achieving the goals it has been set, I think we urgently
need an ethical framework to be built into the AI industry and into AI
itself. Isaac Asimov suggested the Three Laws of Robotics all the way
back in the 1940s, and that seems to be the kind of thing that's
required. Whether it's possible to get the whole industry to comply
with those kinds of standards is open to doubt, but there are precedents
- in genetic engineering, in the use of propellants that damage the
ozone layer, and in the outlawing of DDT, for example. So maybe it can
be done.
Edward
On 6/2/23 9:07 PM, Alan Sondheim via NetBehaviour wrote:
Hi -
I very rarely forward anything, but this is seriously prescient and I
think portends an AI becoming increasingly ominous in our planetary
future.
Meanwhile AI seems more and more prevalent in the arts; it's worrisome.
I've used it myself but keep returning to the concept of testimony vs
testimony; whatever AI is, it might know where the bodies are, but it
doesn't know what a body is, albeit descriptions increasingly perfected.
Just wonder what everyone here thinks about this.
Thanks, Alan
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test?CMP=Share_AndroidApp_Other
_______________________________________________
NetBehaviour mailing list
[email protected]
https://lists.netbehaviour.org/mailman/listinfo/netbehaviour