I suspect, Michael, that this whole subject would be enjoyed by the
19th-century Luddites.

I repeat, it's a red herring designed to avoid the real problems.

Or, to borrow the phrase newly popular for describing US politicians, it's
kicking the can down the road!

Merry Christmas!

Harry

******************************
Henry George School of Los Angeles
Box 655  Tujunga  CA 91042
(818) 352-4141
******************************

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Mike Spencer
Sent: Wednesday, December 22, 2010 11:52 AM
To: [email protected]
Subject: [Futurework] Re: A Robot Stole My Job: Automation in the Recession


Pitfalls of automation on the way to the singularity:

As scary as this scenario is, taken as a model for the larger trend, it's
heartwarming that the financial cohort was reduced to the role of gofers for
the people who actually *do* stuff and that at least *some* critical stuff
got done under the circumstances.

RISKS-LIST: Risks-Forum Digest  Monday 20 December 2010  Volume 26:Issue 25

Also at: http://catless.ncl.ac.uk/Risks/26.25.html


    Date: Tue, 14 Dec 2010 18:07:20 -0500
    From: "Robert L Wears, MD, MS" <[email protected]>
    Subject: Health information technology risks

    Since the ECRI Institute recently moved health IT to its 'top 10
    list' of hazardous healthcare technologies for 2011, I thought I
    would offer this case in point.

    Shortly before midnight on a Monday evening, a large urban
    academic medical center suffered a major IT system crash that
    disabled virtually all IT functionality for the entire campus and
    regional outpatient clinics.  The outage affected ADT
    (admission/discharge/transfer), financial, medical records,
    laboratory ordering and reporting, imaging ordering and
    reporting, and pharmacy systems.  (Two semi-independent
    subsystems, EKG and picture archiving, remained functional in a
    limited sense.)  The outage persisted for 67 hours and forced the
    cancellation of all elective procedures on Wednesday and
    Thursday, involving 52 major procedures and numerous minor
    procedures (such as colonoscopies).  All ambulance traffic was
    diverted to other hospitals during the outage (an estimated 70
    diversions).  There were substantial delays in obtaining
    laboratory and radiology results on existing inpatients, so
    despite the reduction in the number of incoming patients, it was
    difficult to clear out the hospital, as physicians delayed
    discharges pending those results.  Not surprisingly to readers of
    RISKS, the outage was due to a concatenation of small failures
    and long-standing but unrecognized latent conditions.

    The triggering event was a hardware failure in a critical network
    component.  This was repaired, but major servers then had to be
    manually restarted.  During restart, the servers halted and
    reported critical errors; it was then discovered that certain
    file permissions had been changed in ways that prevented the
    clinical systems from rebooting and prevented operators from
    reverting to prior versions.  (It should be noted that these
    systems had not been rebooted in over 26 months.)  Ultimately it
    was found that these changes resulted from an attempt to install
    "high availability" failover capability two years earlier.  The
    high availability project had been plagued with problems from the
    start and was eventually halted before completion, but some of
    the changes it made were never completely rolled back, unknown to
    the system's managers.  In the presence of the network fault,
    these leftover changes triggered an attempt to execute high
    availability failover processes that did not exist, which in turn
    caused the reboot failures.  Once this issue was discovered and
    corrected, the clinical servers could be restarted.  The
    databases then underwent extensive integrity checks, and when
    these proved satisfactory, services were resumed on Friday at
    1600.
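
    [To make that mechanism concrete, here is a minimal sketch in
    Python.  Every path and name in it is invented for illustration;
    this is a hypothetical rendering of the failure mode, not the
    hospital's actual software.  It shows how a stale flag left
    behind by an abandoned failover project can route startup through
    a process that no longer exists and halt the reboot:]

    #!/usr/bin/env python3
    """Illustrative sketch of a boot blocked by leftover HA config.
    All paths and names are hypothetical."""
    import os
    import subprocess
    import sys

    # Stale artifacts of an abandoned "high availability" project,
    # never rolled back (hypothetical paths):
    HA_FLAG = "/etc/cluster/ha_enabled"
    FAILOVER_HOOK = "/etc/cluster/failover.sh"

    def boot_clinical_system():
        """Start the system, deferring to the HA hook if enabled."""
        if os.path.exists(HA_FLAG):
            # The leftover flag routes startup through a failover
            # process that was never actually installed, so the
            # reboot halts with a critical error.
            if not os.access(FAILOVER_HOOK, os.X_OK):
                sys.exit("CRITICAL: failover process missing; halting boot")
            subprocess.run([FAILOVER_HOOK], check=True)
        print("clinical system up")

    if __name__ == "__main__":
        boot_clinical_system()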

    Backloading the clinical and financial data accumulated during
    the outage took considerably longer than the downtime itself.
    There was no evidence that this event was due to external agency,
    malware, hacking, etc.  Interestingly, no pre-existing data were
    lost during the crash and downtime.  A previous risk analysis had
    estimated direct costs for complete downtime at $56,000 per hour,
    so the total direct cost (not including lost revenue from
    canceled cases or diverted patients) is likely close to $4
    million.  As far as is known, no patients were injured during
    this event.
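
    [A quick check of that figure, using the numbers reported above;
    the arithmetic is an editorial addition:]

    hours_down = 67                    # duration of the outage
    cost_per_hour = 56_000             # prior risk-analysis estimate, USD
    print(f"${hours_down * cost_per_hour:,}")  # $3,752,000, i.e. close to $4 million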

    The risks here are multiple, but a few salient points are worth
    emphasizing.

    First, because the initiating hardware failure had partitioned
    the network, it was initially difficult for frontline workers to
    convince help desk personnel that the system was unavailable at
    all.

    Second, it was difficult to understand the nature of the failure
    or to uncover the ultimate cause of the event.

    Third, the organization was slow in activating its own internal
    disaster plan - an incident management group was not convened
    until 1530 Tuesday, roughly 16 hours into the incident.

    Fourth, the social element of the sociotechnical system that is a
    hospital was able to reorganize quickly in multiple ways and keep
    essential services operating, in at least some fashion, for the
    duration.  Many of these adaptations were made "on the fly"; one
    of the most interesting was redeploying financial staff (who now
    had nothing to do, since no bills could be produced) as runners
    to move orders, materials, and results around the organization.

    Fifth, as has frequently been noted in RISKS, maintenance played
    an important part in this failure.  The irony of "high
    availability" work resulting in unavailability is rich indeed.

    Sixth, as Richard Cook has pointed out, a working system, even
    one with known flaws, is a precious resource, so the reluctance
    ever to submit to a full restart over the course of two years
    (which included multiple large and small maintenance downtimes)
    is understandable, even though a restart might have identified
    problems like the undocumented permission and script changes at a
    time when they could have been more easily recognized and
    corrected.

    As more and more care delivery organizations with little
    experience in managing clinical, as opposed to business, systems
    install more and more advanced clinical HIT systems -- systems
    that have not been developed from a safety-critical computing
    viewpoint -- more frequent and potentially more consequential
    failures are likely.

    Robert L Wears, MD, MS
    University of Florida, 1-904-244-4405 (ass't)
    Also Imperial College London, [email protected], +44 (0)791 015 2219
_______________________________________________
Futurework mailing list
[email protected]
https://lists.uwaterloo.ca/mailman/listinfo/futurework

