Patricia Shanahan wrote:
On 2/22/2011 12:16 AM, Peter Firmstone wrote:
Patricia Shanahan wrote:
I want to get going on some performance tuning, but believe it is best
guided and controlled by well-organized benchmarks. To that end, I
propose adding a place for benchmarks to the River structure.
We will need several categories of benchmark code:
1. System level benchmarks. These benchmarks measure public
functionality, such as the outrigger JavaSpace implementation. For
these, I think a structure similar to QA may be best. However, I need
to understand how the QA harness links together clients and servers,
and whether it has any special performance implications. We may need,
for example, to add network delays to properly score implementations
that involve different amounts of communication.
2. Internal benchmarks. These are more like unit tests, and need to
mirror the main src package structure so that they can access
non-public code.
3. Experimental code. In some situations it is useful to do run-offs
between two or more implementations of the same class. We cannot have
two classes with the same fully qualified name at the same time, so
this type of test will need special copies of the classes with
modified class names or package names (see the sketch below). In
addition to actually doing the tests and picking the implementation to
go in the trunk, it is useful to keep discarded candidates around. One
of them may turn out to be a better basis for a future performance
campaign.
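For example, run-off copies might look something like this; the
package names are purely hypothetical, and each class would be in its
own source file:

    // File candidate1/FastList.java -- first run-off candidate
    package org.apache.river.experiment.fastlist.candidate1;
    public class FastList { /* implementation A */ }

    // File candidate2/FastList.java -- second run-off candidate
    package org.apache.river.experiment.fastlist.candidate2;
    public class FastList { /* implementation B */ }

Because the fully qualified names differ, both candidates can sit on
the classpath of a single benchmark run.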
Thoughts? Alternatives? Comments?
Patricia
+1 to 1 and 2; not sure how to handle 3. - Peter.
I wonder if we could have a location for long-term experimental code
in skunk?
If the experiment with a modular build is successful (my apologies for
my recent lack of time), we could simply create an experimental module
and compare it against the original.
We won't always be able to integrate an experiment with its proper
package until after the experiment has been done.
For example, my recent FastList changes involved a change in how a
FastList user scans the list, from one based on list.head() and
node.next() to making FastList Iterable. I did not change the rest of
outrigger to compile against the new interface until I had assured
myself that at least one Iterable implementation was as fast as the
old implementation.
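Roughly, the two scan styles look like this (the names here are a
sketch, not the exact FastList API):

    // Old style: explicit traversal via head() and next()
    for (FastList.Node node = list.head(); node != null; node = node.next()) {
        process(node);
    }

    // New style: FastList implements Iterable, so clients use for-each
    for (Object item : list) {
        process(item);
    }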
I'm also dubious about doing performance comparisons with different
environments for the code being compared. My ideal is a program that
can cycle among implementations in a single run. Next best is a
program that measures a run-time selected implementation, but with
everything except the code under test unchanged. Everything involved
must be built with the same compiler version and parameters, so I
strongly prefer a single build.
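Something like this is my ideal; the interface and harness names here
are hypothetical:

    import java.util.List;

    // Each candidate implements the same interface, and the harness
    // cycles among them within one JVM run, so compiler version, build
    // parameters, heap, and hardware are identical for every candidate.
    interface Candidate {
        void run();    // the operation under test
    }

    class RunOff {
        static void compare(List<Candidate> candidates, int rounds) {
            // Interleave the candidates round by round so that JIT
            // warm-up and GC drift affect all of them roughly equally.
            for (int r = 0; r < rounds; r++) {
                for (Candidate c : candidates) {
                    long start = System.nanoTime();
                    c.run();
                    long elapsed = System.nanoTime() - start;
                    System.out.println(c.getClass().getName()
                            + " round " + r + ": " + elapsed + " ns");
                }
            }
        }
    }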
I'm not sure how all these issues would be handled in the modular
build environment.
Something like this I think (Dennis, what are your thoughts?):
Each module would be considered a separate build, with tests that
include both unit and integration tests. Other modules yours depends
on are treated like libraries; you wouldn't be concerned with their
implementation.
The existing qa harness would be a separate library module (similar to
how junit or jtreg is a separate component). Jini Platform tests would
be separated from implementation tests. (We also need to consider how
to best utilise the discovery and join test kit.)
If the changes are localised to your module, meaning the public API
doesn't change (which makes it like an independent build), you could
simply duplicate the original module, modify it, then run performance
tests against both the new and original modules. The new module would
carry a different version number to reflect the experimental changes.
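As a hypothetical naming scheme, the two artifacts might sit side by
side like this:

    outrigger-2.2.0.jar        original module
    outrigger-2.2.0-exp1.jar   experimental copy, same public API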
If the changes involve changing the module's public API in a
non-backward-compatible manner, and that API is not part of net.jini.*
(which must remain backward compatible), then dependent modules can be
migrated to the new module over time. Proxies that utilise the new
module will need to use preferred class loading to ensure the correct
classes are loaded. The module version number would be incremented to
show it is not backward compatible.
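For the preferred class loading step, a minimal sketch of a
META-INF/PREFERRED.LIST entry might look like this (the path is
hypothetical; the exact syntax is described in the
PreferredClassProvider documentation):

    PreferredResources-Version: 1.0

    Name: org/apache/river/experiment/fastlist/FastList.class
    Preferred: true

Marking the experimental classes preferred means a proxy loads them
from its own codebase rather than resolving to same-named classes
already on the client's classpath.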
Building each module becomes a simpler process.
I'll reply again later, when I've got some more time.
Regards,
Peter.
Patricia