<snip>
I also think it is quite doable, not a huge project. I would like to
avoid basing it on jdiff, because jdiff's approach is to use the javadoc
engine, which means access to the source for the jars being compared
plus a whole lot of complexity (the doclet API isn't trivial). I was
thinking more along the lines of Robert's suggestion - just dive into the
jars, use BCEL to do introspection (or is it reflection?) on the classes
in there.
AIUI introspection is for higher level concepts (like the bean naming conventions) whereas reflection is about the low level mechanics. i'm not sure how BCEL fits in since it's better known for byte code engineering than reflection (but i'm not an expert and i'd be glad to be educated :)
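fwiw, from a quick look at the BCEL javadocs, pointing it at a jar might go something like this (JarScanner and the method name are mine, purely illustrative):

import java.io.IOException;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

import org.apache.bcel.classfile.ClassParser;
import org.apache.bcel.classfile.JavaClass;
import org.apache.bcel.classfile.Method;

public class JarScanner {

    // prints the public API of every class in the jar; BCEL parses the
    // raw class files, so nothing is ever loaded into the VM
    public static void dumpPublicApi(String jarPath) throws IOException {
        JarFile jar = new JarFile(jarPath);
        for (Enumeration entries = jar.entries(); entries.hasMoreElements();) {
            JarEntry entry = (JarEntry) entries.nextElement();
            if (!entry.getName().endsWith(".class")) {
                continue;
            }
            JavaClass javaClass = new ClassParser(jarPath, entry.getName()).parse();
            if (!javaClass.isPublic()) {
                continue;
            }
            System.out.println(javaClass.getClassName());
            Method[] methods = javaClass.getMethods();
            for (int i = 0; i < methods.length; i++) {
                if (methods[i].isPublic()) {
                    System.out.println("    " + methods[i]);
                }
            }
        }
        jar.close();
    }
}

so it looks like it can do reflection-style inspection without a classloader, which would also neatly sidestep any static initialiser weirdness.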
The only drawback I see with this is that you don't get access to @deprecated information. But the simplicity of pointing the tool at two sets of jars (prev-version and curr-version) rather than two source trees seems worth it to me. Plus the performance difference should be significant.
i'd probably advocate decoupling the actual information gathering from the collation. IMHO we should probably start with reflection (not least because it's what i know best ;) for the gathering but leave the option open for a gathering engine based on xdoclet (or something like that) and one based on BCEL (if that's possible).
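something along these lines, maybe (all names hypothetical, one type per source file - just to sketch the decoupling):

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// the simple model that all gathering engines populate
public class ClassInfo {
    private String className;
    private List methodSignatures = new ArrayList();

    public ClassInfo(String className) { this.className = className; }
    public String getClassName() { return className; }
    public void addMethod(String signature) { methodSignatures.add(signature); }
    public List getMethodSignatures() { return methodSignatures; }
}

// one interface, many possible engines behind it
public interface ClassInfoGatherer {
    ClassInfo gatherClassInfo(String className) throws ClassNotFoundException;
}

// the reflection engine we'd start with; a BCEL or xdoclet engine
// would implement the same interface
public class ReflectionGatherer implements ClassInfoGatherer {
    public ClassInfo gatherClassInfo(String className) throws ClassNotFoundException {
        Class clazz = Class.forName(className);
        ClassInfo info = new ClassInfo(clazz.getName());
        Method[] methods = clazz.getDeclaredMethods();
        for (int i = 0; i < methods.length; i++) {
            // toString() gives the full signature, enough for a first diff
            info.addMethod(methods[i].toString());
        }
        return info;
    }
}

the collation code then only ever sees ClassInfo objects and doesn't care where they came from.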
The name is an interesting issue: I see it as both a replacement for jdiff and a binary-compatibility reporter. Maybe JarComparer?
if creating (one day) a competitor for jdiff is your aim then maybe CompareJ might be a reasonable name for the project, with products JarComparer for jars and (one day) SourceComparer for source.
Here's a rough outline as I see it:

  java JarComparer -oldversion foo-1.jar:bar-1.jar:baz-1.jar \
                   -newversion foo-2.jar:bar-2.jar:baz-2.jar \
                   -shared widgets.jar:parser.jar -style html
This generates a DOM representation of all the differences and of any binary incompatibilities found, then runs a stylesheet over it to generate a report. Standard stylesheets are provided to generate text, html or xml output, or a user can specify their own stylesheet.
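The stylesheet step can be plain JAXP; here's a minimal sketch, assuming the diff DOM is already built and report-html.xsl is one of the standard stylesheets (names illustrative):

import java.io.File;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

import org.w3c.dom.Document;

public class ReportWriter {

    // apply the selected stylesheet (standard or user-supplied) to the
    // diff document and write the report
    public static void write(Document diffDocument, File stylesheet, File output)
            throws TransformerException {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(stylesheet));
        transformer.transform(new DOMSource(diffDocument), new StreamResult(output));
    }
}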
i'd probably go for a POJO object model rather than going directly to DOM since POJOs tend to be easier to test and work with (plus there's usually a lower barrier to entry for newbie developers) but i agree that xml's the right choice for reporting. i'd probably choose a SAX pipeline for the output since this can easily be transformed or DOM'd as required. your plans for the actual reporting sound good. your command line seems about right too.
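a minimal sketch of the SAX route (element and file names made up):

import java.io.File;

import javax.xml.transform.sax.SAXTransformerFactory;
import javax.xml.transform.sax.TransformerHandler;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

import org.xml.sax.helpers.AttributesImpl;

public class SaxReportDemo {

    public static void main(String[] args) throws Exception {
        SAXTransformerFactory factory =
                (SAXTransformerFactory) SAXTransformerFactory.newInstance();
        // plug a stylesheet into the pipeline; newTransformerHandler() with
        // no argument gives an identity transform (i.e. raw xml out)
        TransformerHandler handler = factory.newTransformerHandler(
                new StreamSource(new File("report-html.xsl")));
        handler.setResult(new StreamResult(new File("report.html")));

        // the result phase just fires SAX events at the handler
        AttributesImpl noAttrs = new AttributesImpl();
        handler.startDocument();
        handler.startElement("", "diff", "diff", noAttrs);
        handler.startElement("", "removed-method", "removed-method", noAttrs);
        String text = "org.example.Foo#frobnicate()";
        handler.characters(text.toCharArray(), 0, text.length());
        handler.endElement("", "removed-method", "removed-method");
        handler.endElement("", "diff", "diff");
        handler.endDocument();
    }
}

no DOM in memory unless a stylesheet (or a user) actually asks for one.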
It also needs to be packaged as an Ant task that returns success/failure
depending on user needs (eg fail if any binary incompatibilities found).
IMHO it's best to go for a manager POJO which can be wrapped in various ways to create ant tasks, jelly tags, maven plugins etc but yes, you're right that an ant task is essential...
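e.g. something like this (ComparerManager is hypothetical, just to show the shape):

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

public class JarComparerTask extends Task {

    private String oldVersion;
    private String newVersion;
    private boolean failOnBinaryIncompatibility = true;

    // ant maps attributes in the build file onto these setters
    public void setOldVersion(String oldVersion) { this.oldVersion = oldVersion; }
    public void setNewVersion(String newVersion) { this.newVersion = newVersion; }
    public void setFailOnBinaryIncompatibility(boolean fail) {
        this.failOnBinaryIncompatibility = fail;
    }

    public void execute() throws BuildException {
        // all the real work lives in the manager POJO, so jelly tags,
        // maven plugins etc can wrap exactly the same object
        ComparerManager manager = new ComparerManager();
        manager.setOldVersion(oldVersion);
        manager.setNewVersion(newVersion);
        boolean incompatible = manager.compare();
        if (incompatible && failOnBinaryIncompatibility) {
            throw new BuildException("binary incompatibilities found");
        }
    }
}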
so we'd probably be looking at:
1 a gathering phase that creates simple object models for the classes
2 an analysis phase that takes two models and calculates the differences
3 a result phase that takes the results and pushes them into a SAX pipeline
4 a reporting phase that uses stylesheets to create the reports
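sketched as (hypothetical) interfaces, one per phase, each in its own file:

import java.util.List;

import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;

public interface Gatherer {                          // phase 1
    List gatherClasses(String classPath);            // -> List of ClassInfo
}

public interface Analyser {                          // phase 2
    List analyse(List oldClasses, List newClasses);  // -> List of Difference
}

public interface ResultRenderer {                    // phase 3
    void render(List differences, ContentHandler out) throws SAXException;
}

// phase 4 is just a stylesheet applied to the SAX stream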
how does this sound?
I agree with Stephen that SourceForge seems a more natural home for this
tool than jakarta-commons, though I am willing to be persuaded. There's
certainly nothing to prevent us from using the APL 2.0.
community rules :)
if people are more comfortable with sourceforge (and two are so far, by my count) then that's cool with me.
do any (potential) volunteers feel that the sandbox would be better than sourceforge?
- robert