I think this approach sounds a little too brute-force. Regardless of how the metadata is defined, once instances are created in memory they will be exactly the same. Verifying this by repeating every possible test on the exact same DataObjects, just created differently, sounds inefficient. What I think we need instead is a set of consistency tests that confirm that the types created in various ways are in fact the same. The parameterized-tests approach might be a good way to do that, but the tests needed to confirm this are a small subset of all the functionality of SDO. Testing every API N times is definitely overkill IMO.
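To make the idea concrete, here is a minimal, self-contained sketch of such a consistency check: walk the metadata produced by each creation route and compare types and properties by name. The `TypeInfo` class and all names below are hypothetical stand-ins for illustration only, not the real `commonj.sdo.Type`/`Property` API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a type's metadata; a real test would walk
// commonj.sdo.Type and commonj.sdo.Property instead.
class TypeInfo {
    final String name;
    final Map<String, String> properties; // property name -> property type name

    TypeInfo(String name, Map<String, String> properties) {
        this.name = name;
        this.properties = properties;
    }
}

public class MetadataConsistencyCheck {

    // True only if both metadata descriptions contain the same types,
    // and each type has the same properties with the same type names.
    static boolean consistent(Map<String, TypeInfo> a, Map<String, TypeInfo> b) {
        if (!a.keySet().equals(b.keySet())) {
            return false;
        }
        for (String typeName : a.keySet()) {
            if (!a.get(typeName).properties.equals(b.get(typeName).properties)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Simulate the same model built two ways: from an XSD and dynamically.
        Map<String, TypeInfo> fromXsd = new HashMap<>();
        fromXsd.put("Customer", new TypeInfo("Customer",
                Map.of("name", "String", "orders", "Order")));

        Map<String, TypeInfo> fromApi = new HashMap<>();
        fromApi.put("Customer", new TypeInfo("Customer",
                Map.of("name", "String", "orders", "Order")));

        System.out.println(consistent(fromXsd, fromApi)); // prints "true"
    }
}
```

One parameterized test built on a comparison like this could run once per creation mechanism, leaving the rest of the suite unparameterized.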
Actually, it's probably sufficient to have a parameterized test that simply walks through the metadata and confirms the types and properties are as expected. None of the DataObject tests need to be parameterized at all. I've noticed some overlap between the parameterized and non-parameterized tests. It also looks like the parameterized tests make a lot of Tuscany-specific assumptions. I also wonder why ParameterizedTestUtil has its own equals code (instead of just using EqualityHelper). Maybe we should just remove all these tests, and then resubmit/merge any unique tests with the appropriate non-parameterized tests.

One more thing: I noticed that the TestHelper is unnecessarily complicated. Instead of having all kinds of getXXXHelper() methods, it should just have one getHelperContext() method - that's the only method that is implementation-dependent. Other methods, e.g. createPropertyDef(), are not implementation-dependent either, so they shouldn't be in the TestHelper interface. I think we should clean this up and simplify it now, before we have so many tests that we won't want to change it anymore.

Thoughts?

Frank.

"Robbie Minshall" <[EMAIL PROTECTED]> wrote on 05/01/2007 12:06:03 PM:

> I agree that the tests should be structured in a way that is spec and
> functionally oriented. I have never really liked the split between
> parameterized and non-parameterized tests, so getting rid of this is
> just fine.
>
> Other than that, I think that the test cases are more or less organized
> by API, though I am sure some changes could be beneficial.
>
> The idea behind the parameterized tests does indeed lean towards
> consistency. In general, the SDO API should apply regardless of the
> creation means for the DataObject (static, dynamic, mixed, the old
> relational DB DAS or any other datasource). This is done simply by
> injecting a DataObject instance into a common set of tests.
>
> However, I don't think that this should be packaged under a
> consistency package - for me that has the same problems as being
> organized under parameterized, where you do not get a feel for complete
> API coverage.
>
> If you want to get rid of that problem, you should just have a single
> source tree organized by API and have both parameterized and
> non-parameterized tests in that single tree.
>
> I would note that, while slightly diluted (moved more to an interface
> to XML with the lack of work on the RDB DAS), the initial conception of
> SDO as a common API to many datasources should still be maintained.
> In my view this means that API tests etc. should be performed on a
> variety of DataObject creation mechanisms, and parameterized tests are
> the way to go.
>
> cheers,
> Robbie.
>
> On 5/1/07, kelvin goodson <[EMAIL PROTECTED]> wrote:
> > Having spent some time getting to grips with the CTS, there are some things I
> > think I'd like to improve.
> >
> > First amongst them is to get some structure that allows us to get a feel for
> > how well the spec is covered by the tests. One thing that concerns me is
> > that one of the most apparent things in the structure is the split between
> > the parameterized and the "single shot" junit tests. This seems like a
> > junit technology-driven split, and I don't think it is necessary or
> > desirable. We should be able to apply the parameterization feature of junit
> > without it being so prominent in the source code structure.
> >
> > I'd like to see more relation between spec features and test code packaging.
> > That way we are more likely to spot gaps or overlaps. I feel sure that this
> > will throw up some other issues, like testing certain features in
> > combination.
> >
> > As a first step I'd like to propose refactoring the "paramatized" package.
> > As far as I can see, our usage of the junit parameterized testing function is
> > aimed at ensuring consistency between operations performed on graphs when
> > the metadata has been produced a) from an xsd and b) by using the SDO API to
> > create it dynamically. I propose to rehouse these under
> > test.sdo21.consistency.
> >
> > --
> > Kelvin.
>
> --
> * * * Charlie * * *
> Check out some pics of little Charlie at
> http://www.flickr.com/photos/[EMAIL PROTECTED]/sets/
>
> Check out Charlie's al crapo blog at http://robbieminshall.blogspot.com
>
> * * * Address * * *
> 1914 Overland Drive
> Chapel Hill
> NC 27517
>
> * * * Number * * *
> 919-225-1553
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> ---------------------------------------------------------------------
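P.S. The TestHelper simplification I'm suggesting could be sketched roughly as below. This is only an illustration of the shape of the interface: `HelperContext` here is a local stand-in class (not the real commonj.sdo.helper.HelperContext), and the implementation class name is hypothetical:

```java
// Local stand-in for commonj.sdo.helper.HelperContext, used only so the
// sketch is self-contained.
class HelperContext {
    final String implementationName;
    HelperContext(String implementationName) { this.implementationName = implementationName; }
}

// The slimmed-down interface: only the implementation-dependent entry
// point remains. Utility methods like createPropertyDef() would move to
// plain shared helper classes outside this interface.
interface TestHelper {
    HelperContext getHelperContext();
}

// Hypothetical implementation-specific binding (e.g. the Tuscany one).
class DefaultTestHelper implements TestHelper {
    public HelperContext getHelperContext() {
        return new HelperContext("default");
    }
}

public class TestHelperSketch {
    public static void main(String[] args) {
        TestHelper helper = new DefaultTestHelper();
        System.out.println(helper.getHelperContext().implementationName); // prints "default"
    }
}
```

Everything a test needs (TypeHelper, DataFactory, XSDHelper, ...) is reachable from the HelperContext, so the interface doesn't need a getXXXHelper() method per helper.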
