There seems to be consensus on the need for testing, and various 
opinions on how to do it. As an example, I'll describe how the 
WebLogicSubTask can be tested. Anybody's comments are welcome:

-Add the test code here:
xdoclet/core/test/xdoclet/ejb/vendor/weblogic/
                                     |
                                     +-build.xml
                                     | +-ejbdoclet (test target)
                                     | +-wlsdeploy (test target)
                                     | 
                                     +-src/xdoclet/test/
                                                   |
                                                   +-CarEJB.java
                                                   +-WheelEJB.java
                                                   +-DriverEJB.java

In brief: each XDoclet task implementation should have a separate test 
directory structure with sample Java sources carrying @task:tags. Further, 
there should be a build script with targets that invoke XDoclet and possibly 
run tool-specific validation. The test is successful if the component(s) 
can be generated, packaged, deployed and accessed without errors.
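
As a rough sketch, the build script could look something like this. The 
taskdef class name, jar properties and subelement names are assumptions 
here; the exact ones depend on the XDoclet version in use:

```xml
<project name="weblogic-subtask-test" default="ejbdoclet">

  <!-- Class name and classpath are assumptions; adjust to your XDoclet install. -->
  <taskdef name="ejbdoclet" classname="xdoclet.ejb.EjbDocletTask"
           classpath="${xdoclet.jar}"/>

  <!-- Run XDoclet over the sample sources and generate the descriptors,
       including the WebLogic-specific ones. -->
  <target name="ejbdoclet">
    <ejbdoclet sourcepath="src" destdir="build/gen" classpath="${j2ee.jar}">
      <fileset dir="src/xdoclet/test" includes="**/*EJB.java"/>
      <deploymentdescriptor destdir="build/gen/META-INF"/>
      <weblogic destdir="build/gen/META-INF"/>
    </ejbdoclet>
  </target>

  <!-- Package and deploy to a running WebLogic server. The deployment
       mechanics are tool-specific and environment-dependent, so this
       target is only a placeholder. -->
  <target name="wlsdeploy" depends="ejbdoclet">
    <!-- jar up build/gen plus compiled classes, then deploy to WebLogic -->
  </target>
</project>
```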

As Ara pointed out, it's hard to write test cases that cover all 
combinations, so the test writer will have to be "clever" and write 
cases (in my case, EJBs) that cover as much as possible. (For example, 
I'll try to cover all kinds of CMR relationships in my test EJBs.)
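
To illustrate, a sample source like CarEJB.java might carry tags roughly 
like the ones below. The javadoc tags are the configuration XDoclet 
consumes, not executable code, and the exact tag and attribute names are 
assumptions that vary between XDoclet versions:

```java
import java.util.Collection;

/**
 * Test input for the ejbdoclet/weblogic subtasks, not production code.
 *
 * @ejb:bean name="Car" type="CMP" view-type="local"
 */
public abstract class CarEJB {

    /**
     * One-to-many CMR to Wheel; relation and role names are hypothetical.
     *
     * @ejb:relation name="Car-Wheels" role-name="car-has-wheels"
     */
    public abstract Collection getWheels();

    public abstract void setWheels(Collection wheels);
}
```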

By separating tests this way, people can run the tests if they have the 
appropriate environment. -If they don't, well, they can't. -But there 
will always be someone who can.

Cheers, Aslak.


_______________________________________________
Xdoclet-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/xdoclet-devel
