I just committed another refactoring of the repo optimizers along with an
improved version of the JarDelta optimizer and some basic tests. We
should have well-optimized repos now! (Next step is to pack the jar
deltas!)
In running the tests before committing, 11 of the Director tests appear
to be failing. I did not do anything in this area, so I assume they were
failing before this new code came along? What is our policy on failing
tests? I've been commenting out my tests that fail until they can be
fixed. What do others do?
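(As an aside, one alternative to commenting bodies out in our JUnit
3-style TestCase suites is to rename the method so it no longer starts
with "test"; the runner skips it but the code keeps compiling and stays
visible. A sketch only; the class and test names here are made up:)

    import junit.framework.TestCase;

    public class DirectorTest extends TestCase {
        // Renamed from testInstallPlan(); JUnit 3 only runs public void
        // test*() methods, so this one is skipped but still compiles.
        // TODO: re-enable once the underlying failure is fixed.
        public void _testInstallPlan() {
            // failing assertions stay intact instead of being commented out
        }
    }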
A number of tests use the TestMetadataRepository mechanism. This is cool,
but unfortunately it leaves temp files down in Documents and Settings
(on Windows) at a pretty high rate (~30 per full test run). A few other
files are being left around as well. We should ensure that our tests
clean up after themselves. In the optimizer tests I've taken to naming
the files and dirs after the test that creates them (e.g.
p2.optimizers.xxx) so that people encountering these files (leftover from
crashed test runs etc.) know what they are. Do others think this is a
good idea?
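To make that concrete, here is roughly the kind of helper I have in mind
(just a sketch; the class and method names are made up):

    import java.io.File;

    public class TestFiles {
        /** Create a temp dir whose name identifies the owning test. */
        public static File createTestDir(String testName) {
            File dir = new File(System.getProperty("java.io.tmpdir"),
                    "p2.optimizers." + testName + "." + System.currentTimeMillis());
            dir.mkdirs();
            return dir;
        }

        /** Recursively delete a file or dir; call from tearDown(). */
        public static void delete(File file) {
            File[] children = file.listFiles();
            if (children != null)
                for (int i = 0; i < children.length; i++)
                    delete(children[i]);
            file.delete();
        }
    }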
Related to this, we are all likely struggling to set up various repos etc.
temporarily for tests, and inevitably using different approaches. Would
it be worth spending a bit of time creating some test repo infrastructure,
documenting it on the wiki (or wherever), and then making the tests
consistent? Most of the time I spent on this little project was in
managing all the test code and updating multiple copies. That is, until I
refactored to eliminate the duplicate code. Now the tests read well and are
very easy to create. Easy-to-create tests => more tests => better code...
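Something along these lines, perhaps (again just a sketch; the class name
is hypothetical and it builds on the TestFiles helper above):

    import java.io.File;
    import junit.framework.TestCase;

    public abstract class AbstractRepositoryTest extends TestCase {
        protected File repoLocation;

        protected void setUp() throws Exception {
            super.setUp();
            // One identifiable temp location per test method.
            repoLocation = TestFiles.createTestDir(getName());
        }

        protected void tearDown() throws Exception {
            TestFiles.delete(repoLocation);
            super.tearDown();
        }
    }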
Thoughts?
Jeff
_______________________________________________
equinox-dev mailing list
[email protected]
https://dev.eclipse.org/mailman/listinfo/equinox-dev