Dan Creswell wrote:
On 28 February 2011 14:50, Patricia Shanahan <[email protected]> wrote:

Dennis Reedy wrote:

On Feb 28, 2011, at 12:47 AM, Patricia Shanahan wrote:

How would you propose handling a case like outrigger.FastList?

It is package access only, so changing its interface to the rest of
outrigger did not affect any public API. Several classes needed to be
changed to handle the interface change.

If I understand your question correctly, I think it should be fairly
straightforward. Following module conventions, we would have a structure
that looks something like:

outrigger/src/main/java/org/apache/river/outrigger
outrigger/src/test/java/org/apache/river/outrigger

The test (or benchmark) code would be in the same package, just in a
different directory. That would accommodate your package-access-only
requirement.
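
For instance, a rough sketch of that layout (the class and method names
here are hypothetical, purely for illustration):

    // outrigger/src/main/java/org/apache/river/outrigger/FastList.java
    package org.apache.river.outrigger;

    /** Package-private: visible only to other classes in this package. */
    class FastList<T> {
        void add(T item) { /* ... */ }
    }

    // outrigger/src/test/java/org/apache/river/outrigger/FastListBenchmark.java
    package org.apache.river.outrigger;

    /** Same package, different source tree, so package access still works. */
    public class FastListBenchmark {
        public static void main(String[] args) {
            FastList<String> list = new FastList<String>();
            long start = System.nanoTime();
            for (int i = 0; i < 1000000; i++) {
                list.add("item" + i);
            }
            System.out.println("add x 1000000: " + (System.nanoTime() - start) + " ns");
        }
    }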



I don't see how that answers the problem of an intra-package interface
change that needs to be benchmarked *before* making the changes to the rest
of the package that would be needed to integrate the class under test,
should it win the benchmark.

If I had initially named my new FastList implementation
"com.sun.jini.outrigger.FastList", I could not have compiled outrigger in its
presence; it is not a drop-in replacement for the old FastList.

If it had turned out to be slower than the existing FastList, I would still
have wanted to preserve it, and the relevant benchmark, because future
java.util.concurrent changes might make it the better choice. On the other
hand, I would not have made the changes to the rest of outrigger.



So I think we're coming down to the new FastList implementation having to be
called something else for benchmarking purposes, to avoid a conflict with the
old FastList. Or the new implementation could be an inner class of the
benchmark, which could then live in the same package as the original FastList.
Of course, there are still packaging and source organisation concerns to conquer.
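
Roughly along these lines, as a hypothetical sketch (the names are
illustrative, not a proposal for actual class names):

    package com.sun.jini.outrigger;

    /** Hypothetical benchmark whose candidate implementation is a nested class,
     *  so it can live in the same package as the original FastList without a
     *  name clash. */
    public class FastListCandidateBenchmark {

        /** Candidate under test; its nested name never collides with
         *  com.sun.jini.outrigger.FastList. */
        static class CandidateFastList<T> {
            private final java.util.concurrent.ConcurrentLinkedQueue<T> items =
                    new java.util.concurrent.ConcurrentLinkedQueue<T>();
            void add(T item) { items.add(item); }
        }

        public static void main(String[] args) {
            CandidateFastList<String> list = new CandidateFastList<String>();
            long start = System.nanoTime();
            for (int i = 0; i < 1000000; i++) {
                list.add("entry" + i);
            }
            System.out.println("candidate add: " + (System.nanoTime() - start) + " ns");
        }
    }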


Giving the new FastList a different name was my original idea for dealing with this. Now that the discussion has become more active, it may be worth repeating how I started:

=============================================================

We will need several categories of benchmark code:

1. System level benchmarks. These measure the performance of public features, such as the outrigger JavaSpace implementation. For these, I think a structure similar to QA may be best. However, I need to understand how the QA harness links clients and servers together, and whether it has any special performance implications. We may need, for example, to add network delays to properly score implementations that involve different amounts of communication.

2. Internal benchmarks. These are more like unit tests, and need to mirror the main src package structure so that they can access non-public code.

3. Experimental code. In some situations it is useful to do run-offs between two or more implementations of the same class. We cannot have two classes with the same fully qualified name at the same time, so this type of test will need special copies of the classes with modified class names or package names. In addition to actually doing the tests and picking the implementation to go in the trunk, it is useful to keep discarded candidates around. One of them may turn out to be a better basis for a future performance campaign.

=============================================================

The FastList case is an example of the "Experimental code" category.
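
As a rough illustration of what a run-off in that category might look like
(all names here are hypothetical), the candidate implementations could be
kept under distinct names and timed against the same workload:

    package org.apache.river.outrigger.benchmark;

    /** Hypothetical run-off: candidates are kept under distinct names so they
     *  can coexist, and the loser can be preserved for future campaigns. */
    public class FastListRunOff {

        interface Candidate {
            String name();
            void run(int iterations);
        }

        public static void main(String[] args) {
            Candidate[] candidates = {
                new Candidate() {
                    public String name() { return "OldStyleFastList"; }
                    public void run(int n) {
                        java.util.List<Integer> l = new java.util.ArrayList<Integer>();
                        for (int i = 0; i < n; i++) l.add(i);
                    }
                },
                new Candidate() {
                    public String name() { return "ConcurrentFastList"; }
                    public void run(int n) {
                        java.util.Queue<Integer> q =
                                new java.util.concurrent.ConcurrentLinkedQueue<Integer>();
                        for (int i = 0; i < n; i++) q.add(i);
                    }
                }
            };
            for (Candidate c : candidates) {
                long start = System.nanoTime();
                c.run(1000000);
                System.out.println(c.name() + ": " + (System.nanoTime() - start) + " ns");
            }
        }
    }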

Patricia
