Yeah, I guess I'm going to have to get into the test data in order to disprove 
this.  I just don't see how it could be impossible to load the appropriate 
data for a single test beforehand and put the db back afterwards.  Whether or 
not this is feasible in terms of timing on these particular tests is another 
matter.  The way it runs now, those other tests must be putting the data in 
the right state for the next test to run - which is tantamount to a data load.
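
Just to make it concrete, the shape I have in mind for each test is roughly 
this - plain JUnit, and the two helpers at the bottom are purely hypothetical 
placeholders for whatever load/rollback support we end up with, not existing 
OFBiz API:

    import junit.framework.TestCase;

    public class ExampleIndependentTest extends TestCase {

        protected void setUp() throws Exception {
            // load exactly the data this one test needs, nothing more
            loadEntityXml("testdef/data/ExampleTestData.xml"); // illustrative path
        }

        protected void tearDown() throws Exception {
            // put the db back the way we found it, pass or fail
            rollbackAllChanges();
        }

        public void testSomethingSpecific() throws Exception {
            // exercise one service and assert on the result
        }

        // hypothetical helper: read an entity engine XML file and create its values
        private void loadEntityXml(String location) throws Exception {
        }

        // hypothetical helper: reverse every create/store/remove made since setUp()
        private void rollbackAllChanges() throws Exception {
        }
    }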

David, please let me know whether this is just my ignorance on this particular 
data setup or if my assumptions above are incorrect.

Cheers,
Tim
--
Tim Ruppert
HotWax Media
http://www.hotwaxmedia.com

o:801.649.6594
f:801.649.6595

----- "David E Jones" <[email protected]> wrote:

> I'm still for running tests as a set for each suite.
> 
> If you disagree with me, take a look at some of the current test suite
> XML files and explain to me how it makes sense, or is even possible,
> to run most of them with 100% independent tests. You can't even load
> or assert data if you run each test case independently...
> 
> -David
> 
> 
> On Mar 7, 2009, at 1:40 PM, Scott Gray wrote:
> 
> > I haven't worked on it for a few weeks but I do have some code that
> > can track changes on the GenericDelegator and then reverse them when
> > requested.  At the moment it makes the tests independent at the
> > component level, mostly because that was the easiest place to do it.
> > I've tested it by exporting the data from a fresh install, running
> > the tests, exporting again and comparing the differences, and at the
> > moment the only data that gets left behind is anything coming from
> > async service calls.
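> >
> > In spirit the tracking is nothing fancier than this - a heavily
> > simplified sketch with made-up names, not the actual patch:
> >
> >     import java.util.ArrayDeque;
> >     import java.util.Deque;
> >
> >     import org.ofbiz.entity.GenericValue;
> >
> >     // Illustration only: record every write made through the delegator
> >     // and undo them in reverse order when asked.
> >     public class EntityChangeTracker {
> >         private final Deque<Runnable> undo = new ArrayDeque<Runnable>();
> >
> >         public void onCreate(final GenericValue created) {
> >             undo.push(new Runnable() {
> >                 public void run() { /* remove the created value again */ }
> >             });
> >         }
> >
> >         public void onStore(final GenericValue previousState) {
> >             undo.push(new Runnable() {
> >                 public void run() { /* store previousState back */ }
> >             });
> >         }
> >
> >         public void reverseAll() {
> >             while (!undo.isEmpty()) {
> >                 undo.pop().run();
> >             }
> >         }
> >     }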
> >
> > I'll try and make some time for getting it to work at the test level
> > over the next couple of days and then put a patch in jira for
> > review.  Of course the problem with committing it is that a large
> > percentage of the tests will fail because they depend on the tests
> > that came before them.
> >
> > Regards
> > Scott
> >
> > HotWax Media
> > http://www.hotwaxmedia.com
> > 801.657.2909
> >
> >
> > ----- Original Message -----
> > From: "Tim Ruppert" <[email protected]>
> > To: [email protected]
> > Sent: Saturday, March 7, 2009 1:13:26 PM GMT -07:00 US/Canada Mountain
> > Subject: Re: how to write a test case
> >
> > I've been a committer on a number of xxxUnit projects in the past
> > and grew up as one of the people bringing the agile development
> > processes to many different organizations, so I'd like to think that
> > I'm pretty savvy on this stuff.  That being said, I've never been
> > happy with the way the testing frameworks work in OFBiz - some
> > because of my ignorance, but mostly because of the dependencies.
> > I've built code in a test-driven environment and let me just say
> > that we had few bugs that weren't caught, so when people added
> > stuff, we knew just about each and every time when there were side
> > effects and were able to fix them quickly.
> >
> > What I'd like to see sometime soon is something that works like this:
> >
> > 1. Each test (note I did not say component or test suite or test  
> > group, I said test) is totally independent.
> >
> > 2. Each test utilizes entity engine XML files to load the  
> > appropriate data necessary for that test.
> > -- Sometimes this will mean loading the same or similar XML files a
> > few times.
> > -- That's ok :)
> >
> > 3. Each test puts the db back in exactly the same state as it was  
> > before the test.
> > -- I used to use DbUnit to do this in the past.
> > -- Did this for both WebTest tests (functional) and normal JUnit  
> > tests.
> > -- Worked like a charm.
> > -- This should be even easier for us because the Entity Engine can
> > keep track of all we do and roll it all back.
> > -- I know that Scott Gray has been working with this for a bit - and
> > it would be a HUGE win IMHO.
> >
> > 4. Utilizing the Entity Engine for better testing.
> > -- This is alluded to in #3 above about the roll backs.
> > -- It would also be cool if it could keep track of all you do and
> > BUILD an entity engine XML file and save it if you like (a rough
> > sketch of what I mean follows right after this list).
> > -- -- This should be super easy as well.
> > -- Then you could use these files you're generating in these tests
> > for future tests.
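> >
> > Something like this is all I'm picturing for the generation side - a
> > wish-list sketch only, with the per-value serialization left as a
> > hypothetical helper because I'm not tied to any particular API for it:
> >
> >     import java.io.PrintWriter;
> >     import java.util.List;
> >
> >     import org.ofbiz.entity.GenericValue;
> >
> >     // Turn the values a test created into an entity engine XML file
> >     // that can be reused as seed data for future tests.
> >     public class TestDataExporter {
> >
> >         public void export(List<GenericValue> createdDuringTest, PrintWriter out) {
> >             out.println("<entity-engine-xml>");
> >             for (GenericValue value : createdDuringTest) {
> >                 // one element per value, e.g. <Party partyId="..." .../>
> >                 out.println("    " + toXmlElement(value));
> >             }
> >             out.println("</entity-engine-xml>");
> >         }
> >
> >         // hypothetical helper: render a single value as an XML element
> >         private String toXmlElement(GenericValue value) {
> >             return "<" + value.getEntityName() + " ... />"; // illustrative only
> >         }
> >     }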
> >
> > Anyways, that's my wish list, and if we start to get it into place, I
> > think we can build TONS of new unit tests around the existing work.
> > It will make everyone's lives easier and the project even more viable
> > long term.  Looking forward to feedback whenever you guys get a
> > chance, but I really feel this is the way we should go.
> >
> > Cheers,
> > Tim
> > --
> > Tim Ruppert
> > HotWax Media
> > http://www.hotwaxmedia.com
> >
> > o:801.649.6594
> > f:801.649.6595
> >
> > ----- "Vikas Mayur" <[email protected]> wrote:
> >
> >> On Mar 7, 2009, at 2:01 AM, Adam Heath wrote:
> >>
> >>> Vikas Mayur wrote:
> >>>
> >>>>> How did it work?  I reverted back to 660193, the last patch for
> >>>>> OFBIZ-1790, and the accounting tests failed.
> >>>>>
> >>>>> If they worked in the past, I'd like to know when.  If so, then that
> >>>>> means something since then has caused them to break, and I will more
> >>>>> than gladly track that down.
> >>>>>
> >>>>> However, if they have never worked (which is what I'm strongly
> >>>>> suspecting), then I stand by my original assessment.
> >>>>>
> >>>> I do not know why it is not working for you, and I have no
> >>>> idea/solution for this.
> >>>
> >>> If you run the test individually, and follow the instructions in the
> >>> file, it'll probably work.
> >>
> >> Yeah, I think so.
> >>
> >>>
> >>>
> >>> However, that's not how things are done.
> >>>
> >>> All tests are run together.  Every testdef/*.xml file that is in any
> >>> ofbiz-component.xml is run one after the other, with no chance for any
> >>> manual setup between each test.
> >>>
> >>> In this circumstance, they do not work, and never did work.  It is in
> >>> this circumstance that an *automated* test case must work.
> >>
> >> I do not see the point of discussing the same thing again and again.
> >> I agree with your point about making the tests automated, and a lot of
> >> people have complained about this in the past, but no one has really
> >> come forward with a contribution.
> >>
> >> It is not very useful to say that these things in the trunk are
> >> frustrating you because they were not written properly - why not
> >> complain early in the process instead of after a YEAR or so? Sorry
> >> man, there is no time to look back, and why not fix them yourself if
> >> you see issues?
