I wanted to expand on our notion of KISS; that is, you start with what you
think is the simplest set of tools you need to get the job done, and then
add things as needs arise.  We have been doing this for a year and have a
good handful of applications using this approach in production.  None of
them are clinical applications, and all of them do data integration via
backend relational database mapping.

We are not trying to build a collection of higher-level network services
such as CorbaMed provides.  Our applications are pretty much
self-contained: they get whatever they need from the local execution
environment, the database, and the authentication server.  This is pretty
much a 3-tier web application environment.
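
Concretely, a request handler in this style only ever talks to things the
container already knows about.  Here is a minimal sketch, assuming a
container-managed DataSource and container authentication; the servlet
name, JNDI name, and table are hypothetical:

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

// Hypothetical example: everything the request needs comes from the local
// execution environment -- the container-managed DataSource and the user
// identity already established against the authentication server.
public class StudentLookupServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String user = req.getRemoteUser();  // identity set by the container
        try {
            // "jdbc/SchoolDB" is a made-up JNDI name for the local database.
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/SchoolDB");
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT full_name FROM students WHERE student_id = ?")) {
                ps.setString(1, req.getParameter("id"));
                try (ResultSet rs = ps.executeQuery()) {
                    resp.setContentType("text/plain");
                    PrintWriter out = resp.getWriter();
                    out.println("Requested by: " + user);
                    out.println(rs.next() ? rs.getString("full_name") : "no match");
                }
            }
        } catch (Exception e) {
            // JNDI or SQL failure: surface it through the servlet API.
            throw new ServletException(e);
        }
    }
}

No remote service lookups, no distributed transactions -- just the
container, the database, and the identity the authentication server
already established.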

To make this understandable, here is a rough diagram:

Client (browser) ------ Servlet container (HTTP) ------ SQL database
                                 |
                                 ------ Authentication server

The dashed lines are the network.  This is basically 4 network services;
all other services remain within the servlet container.  That's the main
difference between this kind of environment and a full-blown distributed
service environment.  I can get all my id and name resolution from our
databases; I am not trying to get a name and an authorization from
another Medical School, as would be the case if we were building a
virtual student record across several schools.  Yes, we do have
information that comes from a variety of outside organizations, but it
comes with no real-time needs and can be dealt with in periodic 'batch'
style imports and exports.
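
For that outside-organization data, a periodic import can be as plain as
a scheduled job that loads a dropped-off flat file into the local
database.  A rough sketch, where the connection URL, credentials, file
path, and table are all invented:

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical nightly job: load a flat file dropped off by an outside
// organization into the local database.  No real-time coordination needed.
public class NightlyCourseImport {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/school", "loader", "secret");
             BufferedReader in = new BufferedReader(
                     new FileReader("/data/inbound/courses.csv"));
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO external_courses (course_id, title) VALUES (?, ?)")) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split(",", 2);  // naive CSV split
                ps.setString(1, fields[0]);
                ps.setString(2, fields[1]);
                ps.addBatch();
            }
            ps.executeBatch();  // run from cron or the container's scheduler
        }
    }
}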

So what Dave and Thomas are talking about is just that: accessing, in
real time, information from multiple organizational entities.  These
entities don't do backend database integration in real time, so one
can't get all the information from a single system.  You have to make
network calls, and to do that efficiently, securely, and robustly, you
need something like CORBA services.
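
For contrast, here is roughly what the first step of such a
cross-organization, real-time lookup looks like with Java's standard
CORBA support (org.omg.CORBA): bootstrap an ORB, resolve the other
site's naming service, and narrow the result to an IDL-generated stub.
The service name and the stub type in the comment are invented:

import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

public class RemoteRecordClient {
    public static void main(String[] args) throws Exception {
        // The ORB is pointed at the other organization's naming service
        // through standard arguments, e.g. -ORBInitialHost / -ORBInitialPort.
        ORB orb = ORB.init(args, null);

        // Resolve the remote naming service.
        NamingContextExt naming = NamingContextExtHelper.narrow(
                orb.resolve_initial_references("NameService"));

        // "StudentRecordService" is a made-up name.  In a real system the
        // returned reference would be narrowed to an IDL-generated stub, e.g.
        //   StudentRecord rec = StudentRecordHelper.narrow(obj);
        //   rec.getTranscript(someStudentId);
        org.omg.CORBA.Object obj = naming.resolve_str("StudentRecordService");
        System.out.println("Resolved remote object: " + obj);
    }
}

Everything after that -- security, failover, name and id mapping across
institutions -- is exactly the kind of service infrastructure the
self-contained case never has to build.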

So I really believe that there are two kinds of fundamental use-case
scenarios among the folks on this list:

1) Those whose real-time data needs are largely self-contained.

2) Those whose real-time data needs are spread out among multiple
systems.

This will lead to different system architectures.

Yes, I know that the needs of #1 can be met with the more complicated
architecture demanded by #2, but KISS, or Occam's razor, or just plain
lack of resources and a striving for maximal efficiency, probably means
that #1 will be built with a simpler architecture.

I don't see this as a 'fork' in the code base a la some rapidly moving
open source projects.  I see this as two different worlds; the forking,
if any, is yet to come!
