+1 to Todd. Moreover, different architectural layers have different requirements even for the same data. Data transformation is a must-have feature that allows loose coupling between software in different layers.
However, as I see it, Todd has fallen into the trap I mentioned in my previous post: he has read the pattern as saying that the canonical schema must be used as is, in full, everywhere. If we agree that a standardised canonical schema may be decomposed into (still standardised) sub-schemas, we can restore the data transformation feature in the form of data sub-schema re-composition. Any thoughts?

- Michael

________________________________
From: Todd Biske <[email protected]>
To: "[email protected]" <[email protected]>
Sent: Friday, May 29, 2009 7:24:38 PM
Subject: Re: [service-orientated-architecture] Erl on Canonical Schema

The one nitpick I have with this article is the notion that using a canonical model prevents the application of the Data Model Transformation pattern. In reality, it just pushes the transformation somewhere else. Think about it this way: both consumer and provider have some internal processing model. What the canonical model pattern does is force the transform from processing model to service/message model out to the endpoints, rather than allowing the endpoints to push their models outward, leading to a bottleneck of transformations in the middle. I prefer having the transformation from processing model to messaging model be as close to the endpoint as possible.

-tb

Todd Biske
http://www.biske.com/blog/

Sent from my iPhone

On May 28, 2009, at 6:07 PM, Gervas Douglas <[email protected]> wrote:

<<Of all the patterns in the SOA design patterns catalog, there is perhaps no other as simple to understand yet as difficult to apply in practice as Canonical Schema. There are also few patterns that spark as much debate. In fact, the application potential of Canonical Schema can become one of the fundamental factors that determine the scope and complexion of a service inventory architecture. It all comes down to establishing baseline interoperability.
The Canonical Schema pattern ensures that services are built with contracts capable of sharing business documents based on standardized data models (schemas). Unlike the well-known Canonical Data Model pattern (Hohpe, Woolf), which advocates that disparate applications be integrated to share data based on common data models, Canonical Schema requires that we build these common data models into our service contracts in advance. Hence, the successful application of this pattern almost always requires that we establish and consistently enforce design standards.

But before we discuss the standardization of data models and all of the enjoyable things that come with trying to make this happen, let's first take a step back and describe what we mean by "baseline interoperability." When services and service consumer programs interact, data is transmitted (usually in the form of messages) organized according to some structure and a set of rules. This structure and the associated rules constitute a formal representation (or model) of the data. When different services are designed with different data models representing the same type of data, they will have a problem sharing this data because the data models are simply incompatible.

To address this problem, a technique called data model transformation is applied, whereby data model mapping logic is developed so that data exchanged by such services is dynamically converted at runtime from one data model to another. So successful has this technique been that a corresponding Data Model Transformation pattern was developed. However, with data model transformation come consequences.
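To make the technique concrete, data model mapping logic of this kind might look like the following minimal sketch (the services, field names, and models are hypothetical illustrations, not taken from the article):

```python
# Hypothetical sketch of data model transformation: Service A and
# Service B represent the same customer data with incompatible models,
# so mapping logic converts A's model into B's model at runtime.

def to_service_b_model(a_record: dict) -> dict:
    """Map Service A's customer model onto Service B's model."""
    # Service A stores a single "fullName"; Service B expects it split.
    first, _, last = a_record["fullName"].partition(" ")
    return {
        "first_name": first,
        "last_name": last,
        # Service A nests the postal code; Service B keeps it flat.
        "postcode": a_record["address"]["zip"],
    }

a_customer = {"fullName": "Ada Lovelace", "address": {"zip": "63101"}}
b_customer = to_service_b_model(a_customer)
```

Every such mapping is extra logic to develop, govern, and execute on every message exchange, which is exactly the cost the article goes on to describe.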
And with the overuse of data model transformation come real problems pertaining to architectural complexity, increased development effort, and runtime performance demands that can impact larger service compositions to such an extent that, if you press your ear close enough to your middleware, you can actually hear the churning and grinding of this extra runtime latency. These and other details and issues will be discussed separately in an upcoming article in this series dedicated to the Data Model Transformation pattern. What's important for us to understand for now is that the primary goal of applying Canonical Schema is to avoid having to apply Data Model Transformation.

This brings us back to design standards and the scope of their application. Establishing canonical schemas as part of services delivered by different project teams at different times requires that each project team agree to use the same pre-defined data models for common business documents. This may sound like a simple requirement, but something simple is not always easy. Many organizations have historically struggled with the enforcement and governance of standardized data models - so much so that it has led to organizational power struggles, resentment of individuals at being "enforced", and technical difficulties with large-scale compliance and change management (of the data models).

These are all reasons why the Canonical Schema pattern is very commonly applied together with Domain Inventory. Limiting the application, enforcement, and governance of standardized data models to the confines of a manageably sized service inventory dramatically increases the chances of realizing the full potential of this pattern. Canonical Schema epitomizes the transition from silo-based, integrated enterprises to service-orientation. It is a pattern that solves a big problem but asks in return that we make an equally big commitment to its on-going application.
>>

You can read this at:
http://searchsoa.techtarget.com/tip/0,289483,sid26_gci1356943_mem1,00.html

Gervas
