Hi all,

On Sun, 14 Mar 2004 16:54:41 +0100, Harald Weyhing <[EMAIL PROTECTED]> wrote:
> Lofi Dewanto wrote:

>> This is the link:
>> http://www.dstc.edu.au/pegamento/publications/index.html
This is stunning. I was reading the DSTC submission today and decided to check my mail, only to see Harald's message :-O

>> Also, the way of AndroMDA, PIM -> source code without a true PSM, is
>> a pragmatic way, especially if you don't have a good UML tool ;-)

> I can't perfectly agree with this. For AndroMDA, I would say we actually model a PSM. But that's not really important. Being able to transform between different models to get different levels of abstraction, or to separate different aspects of your software into different models, could be very helpful. No question, we need tools that can make use of this, but that's actually what we are working for, right :).
Absolutely! A stack of metamodels with transformations in between stimulates reuse across cartridges. Try building a cartridge for a performance pattern for both JBoss and WebLogic without a stepwise refinement approach... Moreover, at the lowest level of abstraction, having a metamodel that fully describes the component model (call it the PSM if you like) drastically lowers the complexity of your code templates (model-to-text transformations).
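To make that last point concrete, here is a tiny Java sketch (made-up names, nothing to do with AndroMDA's actual metafacades or Velocity templates) of why a resolved PSM element keeps the model-to-text step trivial: the template only prints values that the PIM-to-PSM transformation has already computed.

// Hypothetical PSM-level element: all platform decisions are already resolved,
// so the code template only has to print values, not derive them.
class EntityBeanPsm {
    String beanClass;       // e.g. "CustomerBean"
    String homeInterface;   // e.g. "CustomerHome"
    String remoteInterface; // e.g. "Customer"

    EntityBeanPsm(String beanClass, String home, String remote) {
        this.beanClass = beanClass;
        this.homeInterface = home;
        this.remoteInterface = remote;
    }
}

public class EjbJarEmitter {
    // Model-to-text step: a dumb template over the PSM.
    static String emit(EntityBeanPsm bean) {
        return "<entity>\n"
             + "  <ejb-class>" + bean.beanClass + "</ejb-class>\n"
             + "  <home>" + bean.homeInterface + "</home>\n"
             + "  <remote>" + bean.remoteInterface + "</remote>\n"
             + "</entity>";
    }

    public static void main(String[] args) {
        // The PIM-to-PSM transformation (not shown) would have produced this object
        // from an <<Entity>> model element; the naming conventions live in that step.
        System.out.println(emit(new EntityBeanPsm("CustomerBean", "CustomerHome", "Customer")));
    }
}

If the template had to start from the PIM, all the naming and platform conventions would leak into the template itself, and you would have to repeat them in every cartridge.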

>> I prefer to have my code directly, instead of seeing UML diagrams
>> with e.g. EJBObject, EJBHome classes, etc. and having to work with them.
I can imagine that you see little added value in zooming from an "entity" model element to the standard EJB classes and descriptors. However, if you have a code generator that encapsulates a performance pattern for more than one application server, you (and other J2EE experts) may want to tune it and adapt it to new application servers. Being able to separate transaction and caching concerns from plain O/R mapping helps you manage the complexity of such a code generator. Moreover, don't limit the usefulness of models to visualizing them in diagrams. I am developing a code generator that implements the "write once, deploy n" pattern for WebLogic and JBoss (see http://www.win.ua.ac.be/~pvgorp/professional/phd/index.php#papers). Without a stepwise refinement approach (i.e. with only a PIM metamodel and code templates), my code generator would have no in-memory representation of a component that represents an EJB with caching attributes like "load strategy" (which occurs in both JBoss and WebLogic, yet with different names and in different descriptor files). I would therefore have to duplicate parts of the code generation process for every target application server.
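As a rough illustration (the class names and descriptor elements below are placeholders I made up, not the real jbosscmp-jdbc.xml or weblogic-rdbms-jar.xml syntax), this is the kind of intermediate representation I mean: one PSM-level "load strategy" concept, rendered by two small emitters into two vendor-specific descriptor fragments.

// One PSM-level concept ("load strategy") rendered to two vendor descriptors.
// The element names are illustrative placeholders only.
enum LoadStrategy { ON_FIND, ON_LOAD, LAZY }

class CachedEntity {
    final String ejbName;
    final LoadStrategy loadStrategy;
    CachedEntity(String ejbName, LoadStrategy s) { this.ejbName = ejbName; this.loadStrategy = s; }
}

interface DescriptorEmitter { String emit(CachedEntity e); }

class JBossEmitter implements DescriptorEmitter {
    public String emit(CachedEntity e) {
        return "<entity><ejb-name>" + e.ejbName + "</ejb-name>"
             + "<read-ahead>" + e.loadStrategy.name().toLowerCase() + "</read-ahead></entity>";
    }
}

class WebLogicEmitter implements DescriptorEmitter {
    public String emit(CachedEntity e) {
        return "<weblogic-rdbms-bean><ejb-name>" + e.ejbName + "</ejb-name>"
             + "<load-strategy>" + e.loadStrategy.name().toLowerCase() + "</load-strategy></weblogic-rdbms-bean>";
    }
}

public class DeployN {
    public static void main(String[] args) {
        CachedEntity customer = new CachedEntity("Customer", LoadStrategy.ON_LOAD);
        // Same intermediate element, n target descriptors: only the emitters differ.
        for (DescriptorEmitter emitter : new DescriptorEmitter[] { new JBossEmitter(), new WebLogicEmitter() }) {
            System.out.println(emitter.emit(customer));
        }
    }
}

The point is that the caching concern is expressed once, and only the last model-to-text step differs per application server.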

> Less (or no) transformation between models will surely save time during code generation. This will save a lot of time for developers of huge projects. But concerning those PSMs, if you really model the PIM, I would think that you should never actually *see* one of those PSMs as UML. They will be generated during code generation and when generation is finished they just disappear.
I don't know what you mean by "disappear". Yes, the diagrams of the intermediate models don't have to be shown for every generated application. However, you do need to store the models (PIM, intermediate models and PSM) in a repository, together with all traceability links. Otherwise, your tool could not maintain consistency when the PIM or the code evolves. I'm also investigating how changes to the transformations themselves (M2M or M2T) affect applications that were generated with an older version of those transformations.
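In case it helps, here is a minimal Java sketch (hypothetical names, not any existing repository API) of the kind of traceability record I have in mind; given such links you can at least answer "which generated artifacts are affected when this PIM element changes?":

import java.util.ArrayList;
import java.util.List;

// Every PIM-to-PSM (or PSM-to-code) step records which source element,
// which target artifact, and which transformation version produced it,
// so that later evolution of the PIM or of the transformation can be reconciled.
class TraceLink {
    final String sourceElementId;   // e.g. a PIM class
    final String targetArtifactId;  // e.g. a PSM element or generated file
    final String transformationId;  // e.g. "pim2ejb"
    final String transformationVersion;

    TraceLink(String src, String tgt, String tx, String version) {
        this.sourceElementId = src;
        this.targetArtifactId = tgt;
        this.transformationId = tx;
        this.transformationVersion = version;
    }
}

public class TraceRepository {
    private final List<TraceLink> links = new ArrayList<TraceLink>();

    void record(TraceLink link) { links.add(link); }

    // Impact analysis: which generated artifacts are stale when a PIM element changes?
    List<String> affectedBy(String sourceElementId) {
        List<String> result = new ArrayList<String>();
        for (TraceLink l : links) {
            if (l.sourceElementId.equals(sourceElementId)) {
                result.add(l.targetArtifactId);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        TraceRepository repo = new TraceRepository();
        repo.record(new TraceLink("PIM:Customer", "PSM:CustomerBean", "pim2ejb", "1.2"));
        repo.record(new TraceLink("PIM:Customer", "code:CustomerBean.java", "ejb2java", "1.2"));
        System.out.println(repo.affectedBy("PIM:Customer"));
    }
}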

>> One good thing with QVT would be to make transformations from PIM to
>> PIM, for example.

> I am not sure what you mean by this kind of transformation; could you provide an example, please?
I guess we're talking about PIM refactorings.
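For example (toy code, invented names): a "rename class" refactoring is a transformation whose source and target metamodel are both the PIM metamodel. It stays at the same abstraction level, but still has to keep the model consistent, e.g. by rewriting association ends.

import java.util.ArrayList;
import java.util.List;

// Toy in-memory PIM and a PIM-to-PIM "rename class" refactoring.
class PimClass {
    String name;
    final List<String> associationEnds = new ArrayList<String>(); // names of referenced classes
    PimClass(String name) { this.name = name; }
}

public class RenameClassRefactoring {
    static void rename(List<PimClass> pim, String oldName, String newName) {
        for (PimClass c : pim) {
            if (c.name.equals(oldName)) {
                c.name = newName;
            }
            // Keep the model consistent: update every reference as well.
            for (int i = 0; i < c.associationEnds.size(); i++) {
                if (c.associationEnds.get(i).equals(oldName)) {
                    c.associationEnds.set(i, newName);
                }
            }
        }
    }

    public static void main(String[] args) {
        PimClass customer = new PimClass("Customer");
        PimClass order = new PimClass("Order");
        order.associationEnds.add("Customer");
        List<PimClass> pim = new ArrayList<PimClass>();
        pim.add(customer);
        pim.add(order);

        rename(pim, "Customer", "Client");
        System.out.println(customer.name + " / " + order.associationEnds); // Client / [Client]
    }
}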

Concerning the CDI submission, I very much agree with their approach. I met Keith Duddy from DSTC two weeks ago (see http://www.dagstuhl.de/04101/Talks/) and we observed that our approaches to model transformation are very similar: we both believe that the consistency relationship between models should be specified declaratively, that traceability information is essential for software evolution, etc. However, we also concluded that "reconciliation" code (code that corrects evolution conflicts) does not have to be declarative in all cases. Therefore, I am currently investigating to what extent I can use the CDI language for my "write once, deploy n" case study. I'm also looking at XDE's approach and the ATL approach. I'll keep the list updated.
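To show what I mean by that last distinction (again with made-up names): the consistency relation below only states what must hold between PIM and PSM names, while the reconciliation step that repairs a violation after evolution is ordinary imperative code, because *how* to repair is a design decision rather than a logical consequence of the relation.

import java.util.HashMap;
import java.util.Map;

public class ConsistencyExample {

    // Declarative part: for every PIM entity there must be a PSM bean named <entity>Bean.
    static boolean consistent(Map<String, String> pimToPsm, String pimEntity) {
        String bean = pimToPsm.get(pimEntity);
        return bean != null && bean.equals(pimEntity + "Bean");
    }

    // Imperative reconciliation: here we simply regenerate the mapping entry;
    // one could just as well rename the old bean or ask the user.
    static void reconcile(Map<String, String> pimToPsm, String pimEntity) {
        pimToPsm.put(pimEntity, pimEntity + "Bean");
    }

    public static void main(String[] args) {
        Map<String, String> pimToPsm = new HashMap<String, String>();
        pimToPsm.put("Customer", "CustomerBean");

        // The PIM evolves: "Customer" is renamed to "Client".
        pimToPsm.remove("Customer");
        if (!consistent(pimToPsm, "Client")) {
            reconcile(pimToPsm, "Client");
        }
        System.out.println(pimToPsm); // {Client=ClientBean}
    }
}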

Best regards,
Pieter Van Gorp.

