Tim Churches wrote:
> David Forslund wrote:
>   
>> Joseph Dal Molin wrote:
>>     
>>> Open source efforts/software like OpenMRS, WorldVistA (VistA Office 
>>> etc.), OSCAR etc. that are focused on diffusion/uptake and continuous 
>>> improvement. All need to have practical tools methods etc. to work 
>>> effectively in the heterogeneous health IT ecosystem. Building on Tim's 
>>> view:
>>>
>>>  >> I believe that with a modest upfront investment one can go a long way
>>>  >> toward interoperability.  The
>>>  >> open source community should be leading in this area, because of the
>>>  >> increased cooperation.
>>>
>>> What would that modest investment be? Who would be willing to 
>>> collaborate to make it happen? How does a practical approach dance 
>>> effectively with and benefit from the vision/work of the 
>>> "interoperability" expert community?  How can we leverage the OSHCA 
>>> meeting in May to help the open source health community take the 
>>> leadership role?
>>>
>>>
>>> Joseph
>>>   
>>>       
>> The above quote was from me, not Tim.  I don't know if he has the same 
>> view or not.
>>     
>
> I am not in any way antithetical to investing effort in
> interoperability. However, I do not regard it as an end in itself. The
> goal of open source health informatics must always be to improve the
> health and health care of people. If widespread and ongoing
> interoperability is important, in a given setting or sub-domain, to
> achieving those goals, then lots of effort should be put into
> implementing highly generalised, standards-based interoperability. If
> only limited intraoperability between, say, a few clinics all running
> the same software is required, then I believe it is perfectly
> permissible to take shortcuts and go for easier-to-implement
> non-standard interoperability mechanisms, particularly when software
> development resources are tight, as they almost always are in open
> source projects. And if interoperability is just not needed, then there
> is no point building it in. All these views are modified by the level of
> resources and the expected longevity of the software. If millions of
> dollars and tens or hundreds of person-years are being ploughed into a
> project, then it would be silly not to consider standards-based
> interoperability right from the start. But if, like most open source
> projects, the budget ranges from zero to a few hundred thousand dollars,
> and a few person-years of effort or less is involved, then a more zen
> approach can be taken - regard the software as ephemeral, to be evolved
> or recreated on a regular basis, perhaps even every year or so. In that
> case, the failure to build in complex, standards-based interoperability
> at the early stages is not such a disaster, even if it is needed later.
> Better to get the project up on its feet first.
>
>   
I don't think that interoperability is that costly to consider up 
front.  Even the smallest project can easily consider it during the 
design process.  It may well reject it, but the principles 
of interoperability are important so that the cost of including it in 
the future can be anticipated. 
How one separates modules or components to facilitate 
interoperability can also lower
the cost of development, even for small open source projects.  I submit 
that exchanging and integrating
medical records is an important consideration even if it is not fully 
implemented at the moment.  Deferring it may later require completely 
rewriting, replacing, or converting the old 
system to a new one.
Interoperability certainly isn't the major driver, but it should at 
least be considered up front. 
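
As an illustrative sketch only (the class and method names here are 
hypothetical, not drawn from OpenMRS, VistA, or any project mentioned 
above), separating a record-exchange interface from its implementation 
is the kind of modest up-front design step being described: callers 
depend on the abstraction, so a simple non-standard mechanism can later 
be replaced by a standards-based one without a rewrite.

```python
from abc import ABC, abstractmethod

# Hypothetical interface boundary: callers depend only on this
# abstraction, so a non-standard exchange mechanism can later be
# swapped for a standards-based one without rewriting the callers.
class RecordExchange(ABC):
    @abstractmethod
    def export_record(self, patient_id: str) -> dict:
        """Return a patient record as a plain dictionary."""

    @abstractmethod
    def import_record(self, record: dict) -> None:
        """Merge an externally supplied record into local storage."""

# A deliberately simple, non-standard implementation, adequate for a
# small deployment (e.g. a few clinics running the same software).
class InMemoryExchange(RecordExchange):
    def __init__(self):
        self._store = {}

    def export_record(self, patient_id):
        return dict(self._store[patient_id])

    def import_record(self, record):
        self._store[record["patient_id"]] = record
```

The point of the sketch is the boundary, not the storage: a later, 
better-resourced implementation of `RecordExchange` could speak a 
standard wire format while the rest of the system stays unchanged.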
>   
>> The "modest
>> investment" is in the design of a system up front.  It always saves time 
>> to go through a design process
>> rather than just start coding.  The design process involves 
>> understanding and documenting the underlying
>> abstractions of the process.  This can lead to well-designed interfaces 
>> which properly divide up the labor involved, making development more 
>> efficient.  It is at this point that one reviews the 
>> literature to see how well the interfaces
>> match to existing standards or systems.
>>     
>
> I agree with this to a degree, although I am utterly convinced that the
> traditional "waterfall" methods of designing everything on paper or as
> thought-experiments, encoding that in written specs, and then slavishly
> implementing those specs, are completely broken (yet I still see them used
> all the time for software projects, half of which then fail). Better to
> keep the initial design phase brief, then start coding and reviewing the
> outcome and design using highly iterative agile development methods.
>   
I'm not in favor of traditional "waterfall" methods either, for the 
reasons you describe.  
Having participated in the standards process, I've adopted in my 
internal development some of the 
strategies learned there. I create interfaces all the time between 
the various modules of my 
internal code, but I continuously revise them as I 
learn better what the proper
division of labor is.  Sometimes I "discover" the interface as I add new 
features. This evolution
certainly has been, and should be, done as one develops interoperability 
standards, to make sure
they are useful and embody a proper "division of labor".   I can say 
as an active participant
that this is exactly what was done in the OMG specs we worked on.  They 
went through many
iterations as people implemented and tested them to see that they were 
consistent, sensible, and
would work in a variety of situations.  Much of this knowledge is 
captured in the UML diagrams
that underlie these specifications, and which are probably the most 
enduring part of the work.
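A small sketch of what "discovering" an interface can look like in 
practice (all names here are hypothetical illustrations): once two 
concrete modules turn out to share the same shape, the implicit 
contract can be captured as an explicit interface, without changing 
either implementation.

```python
import json
from typing import Protocol

# After writing two concrete exporters, the shared shape becomes
# apparent; capturing it as a Protocol makes the "discovered"
# interface explicit while leaving both implementations untouched.
class Exporter(Protocol):
    def export(self, record: dict) -> str: ...

class CsvExporter:
    def export(self, record: dict) -> str:
        return ",".join(f"{k}={v}" for k, v in record.items())

class JsonExporter:
    def export(self, record: dict) -> str:
        return json.dumps(record)

def publish(record: dict, exporter: Exporter) -> str:
    # Callers now depend on the interface, not on a concrete module,
    # which is the "proper division of labor" the interface encodes.
    return exporter.export(record)
```
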

Developers who leverage that work, including the use cases, sequence 
diagrams, etc., can save
a lot of development time, I believe.  I'm a strong proponent of the 
"XP" development
methodology, but still believe in good specifications to guide even a 
small project.  This
always saves time and results in better code, in my opinion.

Dave
> Tim C
>
>   
