On Sun, 2003-06-08 at 08:56, Tim Cook wrote:
> On Sat, 2003-06-07 at 17:27, Tim Churches wrote:
> 
> > Alas, Dave is correct. ZODB is fine but its scalability is not that
> ...
> >  or at least in the middleware. Dave will
> > argue that that is what the OMG HDTF corbaMED RAD (resource access
> > decision) service specification is for - and he is right:
> 
> 100% agreement there.
> 
> 
> > Yup. Individual clinic systems should not be physically located in
> > individual clinics - they should be hosted in large data centres
> > designed to run 24x365. But that doesn't mean that the data stored in
> > the hosted individual clinic systems can't be under the sole control of
> > those individual clinics.
> 
> I'm interested in the implementation plan for this concept.
> Especially when related to your other post about privacy concerns and
> database administrators. 
> 
> Doesn't consolidating multiple clinical systems in one location put more
> records at risk of theft at one time?  

Not if they are all protected by clinical system-specific encryption. If
my clinical database is strongly encrypted, I will happily hand out
CD-ROMs of it on the street corner (well, slight hyperbole there, but
you know what I mean).
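
To make the "hand out CD-ROMs on the street corner" point concrete, here is a minimal sketch of clinic-specific encryption, using the third-party Python `cryptography` package (Fernet, an authenticated AES-based scheme). The key names and record content are illustrative only; the point is that the data centre, or anyone holding a copy of the database, sees only ciphertext unless they hold that clinic's key.

```python
from cryptography.fernet import Fernet

# Key generated and held by the individual clinic - never by the hosting
# data centre, and never shipped alongside the encrypted database.
clinic_key = Fernet.generate_key()
cipher = Fernet(clinic_key)

record = b"patient 1: BP 120/80"
ciphertext = cipher.encrypt(record)  # what the data centre actually stores

assert ciphertext != record                    # opaque without the key
assert cipher.decrypt(ciphertext) == record    # clinic can still read it
```

The design point is key custody, not the cipher: as long as the key stays with the clinic, consolidating many clinics' encrypted databases in one physical location does not let a data-centre administrator (or a thief) read any of them.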

> 
> This is akin to the ASP model which increases the risk that my physician
> can't get to my record due to some service interruption outside of my
> geographical area.  If the power is out in my local clinic then getting
> to my record isn't the only problem we have.  If a backhoe cuts a fiber
> 100 miles away it'll be darn frustrating.  

Yes, but that has to be balanced against the risk that my physician
can't get at my record because his/her local server (which is a
bargain-basement machine because s/he can't afford proper server-class
hardware) has failed. Which is more likely? I don't know, but in either
case, contingency plans need to be in place. With the
individual-clinic-system-storage-hosted-in-a-data-centre model, there
would still be a local server (probably an el-cheapo box running Linux)
to act as a read/write cache, and in the event of a failure of
communications to the data centre, as a write cache, so that patient
records can still be updated in some fashion. In the case of the
individual-clinic-system-storage-hosted-in-the-individual-clinic model,
when the local server fails, what do you do? Also, which is likely to be
rectified sooner: a server melt-down in a single physician's office, or
a comms failure that affects thousands of customers? (Having posed that
question, I am well aware of the 1998 Auckland disaster - in which the
main electricity cables supplying the Auckland central business district
failed, and took several weeks to fix).
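
The local write-cache idea above can be sketched as follows. This is a hypothetical illustration (the `WriteCache` class and `send_upstream` callable are my own names, not part of any existing system): record updates are applied to a local queue and pushed to the data centre opportunistically; when the comms link is down they simply accumulate, and are flushed once connectivity returns.

```python
from collections import deque

class WriteCache:
    """Buffers patient-record writes locally; flushes them upstream
    to the data centre whenever the comms link is available."""

    def __init__(self, send_upstream):
        # send_upstream: callable that raises ConnectionError when link is down
        self.send_upstream = send_upstream
        self.pending = deque()

    def write(self, record):
        self.pending.append(record)   # record is always captured locally first
        self.flush()                  # opportunistically push queued writes

    def flush(self):
        while self.pending:
            try:
                self.send_upstream(self.pending[0])
            except ConnectionError:
                return False          # link down; keep records queued locally
            self.pending.popleft()    # only discard once upstream has it
        return True

# Usage: during an outage, writes accumulate locally...
sent = []
link_up = False
def send(rec):
    if not link_up:
        raise ConnectionError("comms link to data centre is down")
    sent.append(rec)

cache = WriteCache(send)
cache.write({"patient": 1, "note": "BP 120/80"})
assert sent == [] and len(cache.pending) == 1

# ...and are flushed once the comms link is restored.
link_up = True
cache.flush()
assert sent == [{"patient": 1, "note": "BP 120/80"}] and not cache.pending
```

A real implementation would persist the queue to local disk and handle conflict resolution on reconnect, but the shape is the same: the clinic keeps working through a fibre cut, and the data centre catches up afterwards.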

Perhaps it is a case of six of one, half a dozen of the other, but my
feeling is that after taking into account local physical security,
physician interest and competency in running and backing up servers, the
need for off-site back-up, disaster recovery and so on, the data centre
model wins hands down. But I agree: the system must still be usable in
the event of a comms failure.

Tim C

> 
> Later,
> Tim
> 
-- 

Tim C

PGP/GnuPG Key 1024D/EAF993D0 available from keyservers everywhere
or at http://members.optushome.com.au/tchur/pubkey.asc
Key fingerprint = 8C22 BF76 33BA B3B5 1D5B  EB37 7891 46A9 EAF9 93D0
