Hi Edward,
I absolutely agree with everything you say here; AI is essential to
medicine and to many other areas of science as well. But the threat that's
looming is rapidly getting out of control - the AI is general-purpose, more
or less accessible to anyone, and already being used by 'bad actors' - I
see this even in a tiny way on Facebook for example where there's been a
sudden increase in targeted hacks on Fb pages (not to mention duplicate
pages, etc.). I could see a situation where medical computers are
'hardened' - taken off the grid for the most part and more or less
impervious to anything but local hacking. On the other hand, the damage
that's already being done with the new AI is enormous; the machines are
already giving up snippets of their code for example, and I'm sure they're
being employed in conflicts. I'm sure as well any number of countries,
including Russia, will be hurrying to catch up with this. I think the most
reasoned analysis comes from Bruce Schneier, https://www.schneier.com/ -
who is more careful and knowledgeable than I am.
I'm surprised, as I think we all are, by the incredibly fast rise of AI
after decades of slow progress, going all the way back to Terry Winograd's
blocks world and Čapek's robots etc. etc., as well as "mechanical"
chess players etc. earlier than that. Long history. And we're also aware
today of spy satellites, the computerization of war and war strategies,
fake news generation, and so forth. It will be interesting to see how all
of this plays out in the run-up to the U.S. elections, where I think
bad actors will be in full force.
Finally, when I started teaching the internet (I offered a course at
Film-Video Arts in NY and at the New School as well), one of the first
things I did was
take students to the most advanced site I could find on the very early WWW
- it was a neo-nazi site, perfectly laid out, clickable to English or
German text, incredible graphics, etc. ("Created by former Alabama Klan
boss and long-time white supremacist Don Black in 1995, *Stormfront* was
the first major hate site on the Internet." - Wikipedia) "They" were on to
it, before most anyone, and it was frightening. And "they" and others are
still onto it today -
But then I'm a self-doubting pessimist, and your post is informative in a
way that my looming concern isn't - I hope I'm wrong here in every
conceivable way.
Thank you so much, Alan
On Sun, Jun 4, 2023 at 10:26 AM Edward Picot <[email protected]> wrote:
> Hi Alan,
>
> I work in a doctor's surgery where AI is very much seen as a potential
> solution to the problem of overwhelming workload. The doctors are always
> struggling to find time to process all the incoming results (blood
> tests, x-ray results, urine results, dexa scan results etc). If a result
> comes back showing, say, that the patient has a low sodium level, this
> could possibly be caused or exacerbated by one of the medications the
> patient is on; or maybe by a course of treatment the patient is
> having at the hospital; or maybe by a medical condition the patient has
> which we already know about. The doctor has to go rooting around in the
> patient's notes to find out if any of this applies, and it can be a very
> time-consuming process. At the moment our computer system tends to flash
> up warnings (Low sodium can be caused by loop diuretics!) without
> telling you whether this warning applies to this particular patient, and
> if so which of their medications it applies to, and what the best
> medical advice would be (stop it, reduce the dose, change to a different
> diuretic...). But the potential is for AI to do all of those things,
> probably within the next five years or so, saving an enormous amount of
> human time and potentially human error.
>
> In the hospital environment, AI has already been shown to be more
> efficient than humans at processing things like cytology results and
> mammogram results. It makes fewer errors, it processes the information
> faster, and of course it can keep doing it round the clock and seven
> days a week, without needing a break.
>
> It's also the case that when doctors are dealing with patients face to
> face it's becoming more and more difficult to remember and apply the
> latest guidelines - in the UK we're supposed to operate in accordance
> with NICE guidelines, NICE standing for National Institute for Clinical
> Excellence. Even something comparatively simple, like prescribing the
> right medication for a diabetic, or prescribing the right combination of
> inhalers for somebody with Chronic Obstructive Pulmonary Disease, has
> got much more complicated in the last couple of years with the
> introduction of new medications such as Semaglutide/Liraglutide (for
> diabetics) and LABA/LAMA inhalers for COPD. So you have to go and look
> up the guidance, or try to find a nice convenient summary of it, because
> it's really difficult to remember and changing all the time. Again, AI
> could be a big help if it were properly implemented.
>
> This isn't to say that doctors aren't worried about their status and
> expertise being eroded. They are. But I think most of them view an
> ever-increasing implementation of AI in healthcare settings as
> inevitable, and as long as it's implemented in a sympathetic and
> user-friendly way I think it will generally be welcomed.
>
> All of this raises bigger questions, of course - about humans and human
> interaction being sidelined for the sake of labour-saving and
> cost-saving AI implementation. I think we do have to be very careful
> about how we introduce this stuff, but I also think that we probably
> won't be. If the Department of Health can see ways to increase
> efficiency and reduce the wages bill in the NHS by bringing in more AI
> then that's what it will do, and if that means a lot of lab technicians
> and doctors losing their jobs and a lot of elderly patients finding it
> increasingly difficult to get face to face appointments with an actual
> real life doctor in an actual surgery, too bad.
>
> On this and the even bigger questions about whether AI itself is going
> to start making decisions which sideline or eliminate humans in the
> interests of achieving the goals it has been set, I think we urgently
> need an ethical framework to be built into the AI industry and into AI
> itself. Isaac Asimov suggested the three laws of robotics all the way
> back in the 1940s, and that seems to be the kind of thing that's
> required. Whether it's possible to get the whole industry to comply
> with those kinds of standards is open to doubt, but there are precedents
> - in genetic engineering, in the use of propellants that damage the
> ozone layer, and in the outlawing of DDT, for example. So maybe it can
> be done.
>
> Edward
>
>
> On 6/2/23 9:07 PM, Alan Sondheim via NetBehaviour wrote:
> >
> >
> > Hi -
> >
> > I very rarely forward anything, but this is seriously prescient and I
> > think portends an AI becoming increasingly ominous in our planetary
> > future.
> >
> > Meanwhile AI seems more and more prevalent in the arts; it's worrisome.
> >
> > I've used it myself but keep returning to the concept of testimony vs
> > testimony; whatever AI is, it might know where the bodies are, but it
> > doesn't know what a body is, albeit descriptions increasingly perfected.
> >
> > Just wonder what everyone here thinks about this.
> >
> > Thanks, Alan
> >
> >
> > https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test?CMP=Share_AndroidApp_Other
> >
> >
> >
> >
> > _______________________________________________
> > NetBehaviour mailing list
> > [email protected]
> > https://lists.netbehaviour.org/mailman/listinfo/netbehaviour
--
=====================================================
directory http://www.alansondheim.org tel 347-383-8552
email sondheim ut panix.com, sondheim ut gmail.com
=====================================================