Wayne Wilson wrote:
> 
< once again, cheekily responding to his own posts :) >
> 
>   My point about scale is this:  With a small enough set of records
> that a human being can read them all, the 'cleansing' and
> 'transformation' take place inside the human's conscious
> understanding.  It's only when your record sets get large enough
> (relative to an individual's retrieval set) that you need a computer's
> daily assistance with these acts, and that you start to see problems.
>
Further reading of this thread convinces me that the crux of Andrew's
argument is really scale. If you scale the record set down to just what
an individual clinician needs to use (which has been demonstrated to be
less than even a single patient's complete medical record), then the
number of records is small enough that I suspect even 'mediators' are
not needed.  The mediators would best be used to perform a kind of
personal annotation that eases the conscious understanding of what is
being read, thus allowing the clinician to spend less time on
historical review of the record.

So what we would have then for medical record keeping are systems in
which aggregate functions (defined as operating across many clinicians'
sets of patients) are not used.  If you will remember the discussion on
principles of confidentiality, this fits nicely with a patient privacy
model that restricts access to actively involved clinicians and
eschews personally identifiable aggregation.

  Aggregate information is still needed for public health and research
purposes, but can be supported with separate systems designed for that
purpose and taking feeds from the individually based record systems.

  The issues of large-scale organization of data management (data
stewardship, etc.) would need to be thought through, as I doubt that
large-scale organizational entities will disappear in health care.
