Andrew po-jung Ho wrote:
> 
> On Fri, 15 Dec 2000 14:52:43   Wayne Wilson wrote:
> ...
> >source schema ==> reference schema (IDEF1X data model to be specific)
> >==> receiver schema.
> >
> ...
> >I don't see how to avoid the M to N problem without an
> >intermediate common representation and neither do Renner et. al.
> 
> Hi Wayne,
>   Although I can appreciate the benefits of having an intermediate common 
>representation, I just don't see how it is absolutely necessary.
>   Using an easier to understand example, if we wish to go from schema=Chinese to 
>schema=Japanese, why do we need to go through an intermediate schema=English?
> 
> [Chinese==>Japanese vs. Chinese==>English==>Japanese]
> 
If that's all you want to do, there is no reason.  But when you want to
go from Chinese to Zulu, Chinese to German, Chinese to Spanish, Chinese
to ..., it becomes a much easier task if a superset intermediary
exists: Chinese to <generic language schema>, and then into the target
language.
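The arithmetic behind that is worth spelling out: with n languages (or schemas) and direct pairwise translation you need n*(n-1) directed translators, versus 2n with a common intermediary. A quick sketch (plain Python, nothing domain-specific assumed):

```python
# Directed translators needed for n schemas/languages.

def pairwise(n):
    return n * (n - 1)   # every ordered pair gets its own translator

def via_hub(n):
    return 2 * n         # one adapter in and one adapter out per schema

for n in (2, 5, 100):
    print(n, pairwise(n), via_hub(n))
```

Note that for n=2 the pairwise approach is actually cheaper (2 translators vs. 4 adapters), which is exactly why Chinese-to-Japanese alone doesn't need English in the middle; the hub only pays off as n grows.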

  To give a similar example (and one that shouldn't have needed an
intermediate in the first place), look at HL7 in any major medical
center: nearly all of us have interface engines whose sole purpose is
to connect HL7 from System A to HL7 from System B.  Once again, as long
as we only had Systems A and B, we didn't much bother; we connected
them directly and fiddled with parameters to make them talk.  But, being
large, we have hundreds of systems and got tired of fiddling with all
those one-to-one connections.  It became much easier when we bought an
interface engine, which acts as a single target to adapt our connections
to.  Now we have a more tractable and linearly scalable solution: each
new system means one new adapter instead of 100 new adapters.  Each new
Chinese document needs one translation instead of dozens.
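As a rough sketch of how such a hub is organized (the system names, field names, and message shapes here are invented for illustration; real HL7 messages are far richer):

```python
# Hypothetical hub-style interface engine: each system registers one
# adapter pair (into and out of a common record) instead of a separate
# translator for every other system.

adapters = {
    "SystemA": {
        "to_common":   lambda msg: {"patient": msg["pt_name"], "id": msg["pt_id"]},
        "from_common": lambda rec: {"pt_name": rec["patient"], "pt_id": rec["id"]},
    },
    "SystemB": {
        "to_common":   lambda msg: {"patient": msg["name"], "id": msg["mrn"]},
        "from_common": lambda rec: {"name": rec["patient"], "mrn": rec["id"]},
    },
}

def route(message, source, target):
    """Translate a message from one system's format to another's
    by way of the common record."""
    rec = adapters[source]["to_common"](message)
    return adapters[target]["from_common"](rec)

msg_a = {"pt_name": "Doe, Jane", "pt_id": "12345"}
print(route(msg_a, "SystemA", "SystemB"))
# -> {'name': 'Doe, Jane', 'mrn': '12345'}
```

Adding a hundredth system to this dictionary costs one adapter pair; connecting it to every other system directly would cost 99 translators in each direction.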

 Now I submit that the problem of connecting one vendor's version of
HL7 to another vendor's version of HL7 is much simpler than connecting
one vendor's data model to another vendor's data model, which in turn
is vastly simpler than connecting one person's viewpoint of data to
another person's.

 I would also submit that the nature of this problem is almost entirely
in the realm of the computer and hidden from plain sight.  When people
read material that is sufficiently rich in contextual references, or
written in a highly technical language they are immersed in, the
translation from one person's viewpoint to another's takes place
automatically as they read.  The problems a computer system might have
in trying to retrieve or process such a document based on its content
are not apparent to the human reader at all!  And, if the scale of the
computer's data is small enough, no problems seem to exist in
processing and retrieval either.
