On Aug 15, 2006, at 6:39 AM, Meeraj Kunnumpurath wrote:

Rather than hacking the code you can just set the "tuscany.installDir"
system property
Thanks Jeremy, I did see the usage of tuscany.installDir. My question
was: in the absence of the system property, does the runtime always need
to resolve the extensions directory relative to the directory from which
the launcher jar was loaded? Can it do the same if the
MainLauncherBooter was loaded from an exploded directory rather than a
jar?

The "tuscany.installDir" property was designed to support debugging of a user's application code. A user typically does not have the source for Tuscany in their IDE, nor all the individual jars on their classpath - they have a project with their code and have Tuscany installed somewhere. It's the same as with a web application - I don't have the source for Tomcat or WebSphere available and I don't import their individual jars into a project.
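For illustration, here is a minimal sketch of how a user might point a debug run at an existing install; the path and class name are invented for the example, and in practice you would simply pass -Dtuscany.installDir=... as a VM argument in the IDE's launch configuration:

```java
// Hedged sketch: pointing a debug launch at an existing Tuscany install.
// The directory "/opt/tuscany" and class name are illustrative only.
public class InstallDirExample {
    public static void main(String[] args) {
        // Equivalent to passing -Dtuscany.installDir=/opt/tuscany on the
        // java command line; only set it if the user has not already done so.
        if (System.getProperty("tuscany.installDir") == null) {
            System.setProperty("tuscany.installDir", "/opt/tuscany");
        }
        System.out.println("install dir: " + System.getProperty("tuscany.installDir"));
    }
}
```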

What we're trying to do is extend that environment so that the "user" (us) has all of the guts of the runtime exposed and can debug part of it, e.g. step through the core or debug an extension. If we just add the runtime jars to the classpath (e.g. as dependencies in Maven or as libraries in an IDE) we distort things even further:

1) There's no installation to speak of - just a bunch of jars on the classpath. Components that rely on having an installation directory structure (such as the DSE) won't work. The property is a way around that but does not really solve the problem, because...

2) The launcher isolates the application from the runtime by loading the runtime in a separate classloader. The jars for that classloader are found by scanning a directory in the installation directory (or one specified by the "tuscany.bootDir" property). By placing these jars in the system classloader the isolation is broken. This is great for us when debugging the runtime, except for the subtle (or not so subtle) classloader problems it may cause - the environment is different, which means the code will not debug the same.
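The isolation mechanism described in 2) can be sketched with plain JDK classes - this is not Tuscany's actual launcher code, just an illustration of the technique of building a child classloader over the jars found in a boot directory, so the runtime never appears on the system classpath:

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class IsolationSketch {
    // Build a classloader over every jar found in the boot directory,
    // keeping the runtime classes out of the system classloader.
    static ClassLoader createBootClassLoader(File bootDir, ClassLoader parent)
            throws Exception {
        List<URL> urls = new ArrayList<URL>();
        File[] files = bootDir.listFiles();
        if (files != null) {
            for (File f : files) {
                if (f.getName().endsWith(".jar")) {
                    urls.add(f.toURI().toURL());
                }
            }
        }
        return new URLClassLoader(urls.toArray(new URL[0]), parent);
    }

    public static void main(String[] args) throws Exception {
        // "tuscany.bootDir" overrides the default location, as described above;
        // the fallback directory name "boot" is an assumption for this sketch.
        String boot = System.getProperty("tuscany.bootDir", "boot");
        ClassLoader loader = createBootClassLoader(
                new File(boot), IsolationSketch.class.getClassLoader());
        System.out.println("boot classloader created over " + boot);
    }
}
```

Putting the same jars on the system classpath instead bypasses this child loader entirely, which is the breakage described above.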

This is not a problem unique to Tuscany, and it is one that has been solved before. The solution we have chosen leverages the capabilities of the IoC architecture we used for the runtime, and for the SCA programming model in general. That solution is to have components clearly define the things they depend on (the IoC contract) and then have a test framework set up those dependencies in order to exercise the component. Those dependencies need to be fairly granular - e.g. at the level of a simple interface, not "the entire runtime".

If you do that you can partition your testing into two phases:
1) component testing, where some test harness sets up the dependencies for a component and then exercises the component in those contexts;
2) integration testing, where you already know from 1) how the component will behave, so you focus on making sure that the things that use your component set up the contexts it expects.

The SCA programming model expects and supports users who write and test applications in this way. The spec has gone to a lot of effort to allow users to test their components without needing a running SCA environment. The use of an IoC architecture in the Java C&I model is specifically designed to enable that.

If I am implementing a component, the C&I model explicitly calls out the IoC contract - it clearly defines the Services, References and Properties that a component has. That is the context for component testing that can be set by a test harness. If my component is implemented in Java, that test harness can be something as simple as JUnit with EasyMock for the references.
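To make that concrete, here is a sketch of the idea using only the JDK - the Greeter interface and GreetingComponent are invented names for illustration, and the dynamic proxy stands in for what EasyMock would generate in a real JUnit test. The point is that the component's reference is satisfied through its constructor, with no running SCA environment:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ComponentTestSketch {
    // An invented reference interface the component depends on.
    interface Greeter {
        String greet(String name);
    }

    // An invented component whose IoC contract is a single reference,
    // supplied through the constructor rather than looked up from the runtime.
    static class GreetingComponent {
        private final Greeter greeter;
        GreetingComponent(Greeter greeter) { this.greeter = greeter; }
        String welcome(String name) { return "Welcome: " + greeter.greet(name); }
    }

    // A dynamic-proxy stub standing in for an EasyMock mock: the reference
    // is satisfied by the test harness, not by a live runtime.
    static Greeter stubGreeter() {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args) {
                    return "Hello " + args[0];
                }
            });
    }

    public static void main(String[] args) {
        GreetingComponent component = new GreetingComponent(stubGreeter());
        String result = component.welcome("Meeraj");
        System.out.println(result); // prints "Welcome: Hello Meeraj"
        if (!"Welcome: Hello Meeraj".equals(result)) throw new AssertionError(result);
    }
}
```

In a real test you would replace stubGreeter() with EasyMock.createMock(Greeter.class) plus the usual expect/replay/verify calls, but the shape of the test is the same.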

Once you've tested your components, then they can be assembled into composites for integration testing. That is where you would ensure the bindings and policies set up are appropriate for the way in which you want to use the component.

We have chosen to use SCA to assemble the runtime and this means these same techniques can be used to debug runtime components as well. Extension components can be tested on their own with dependencies resolved by a test harness such as JUnit with EasyMock. You should be able to test all the codepaths in an extension this way - all it takes is writing some test cases. This is easily debuggable in an IDE, just like a user's application code would be.

Once you know the component works as expected, your extension can then be integration tested with a real live runtime. This will involve deploying application components that use it, either as implementations (for a container extension), or to talk to other applications (for a binding). This may involve deploying to another runtime e.g. to a web container so that inbound HTTP requests can be tested. There will be a lot of moving parts, but that is in the nature of integration tests.

Putting it simply, the more testing you do at the component level, the easier integration testing will be. We have extensions out there with few if any component-level tests. This means all testing (if any) and debugging is done at the integration level, which means there are a lot of moving parts to set up and get right even before you start to test your component.

Putting it another way, people writing extensions should write unit tests for their components - it's easier for them and easier for others.

--
Jeremy

