Shane Curcuru wrote:
>While I hate to invent partial solutions that will need re-work later,
>I also want to make sure we are focusing on the same goal.  Ensuring
>that the xml-xalan project is of high quality and is very standards
>compliant is a goal I think we all agree on.  If we're *also* thinking
>of building Apache-sponsored donations to OASIS or the like for generic
>detailed conformance testing - both tests and potentially automation
>methods - then we should be explicit about that and get buy-in to the
>larger goals as well.

I think that the conformance tree is already suitably organized for use
in the OASIS conformance testing system. But we need to kick out some
tests that are not true conformance tests. These tests migrate to the
accept cluster, which everyone seems to like. The problem is how to put
the gold/silver reference outputs in position for comparison. If we do
nothing to the automation, then we get approach (1), relying on CVS and
introducing the weaknesses you pointed out earlier. So the question is
whether (and who and when) to do some extra work in Ant so that you can
do a simple check-out from CVS. (And if we change to a different Code
Management System, you do a simple check-out from that.) There would be
further detailed decisions about the role of the naming scheme.

To respond to Ilene's question about the blended naming approach in this
context: the easiest way is to have separate trees, and run automated
tests with a parameter designating the tree (by naming the highest-level
directory whose name reflects the differences). Moving files, renaming
files, constructing filenames on the fly, combining files from different
schemes (to have fallbacks), or any other way of blending names means
more work in developing the test automation. If it's "worth it", fine.
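To make the separate-trees idea concrete, here is a minimal sketch of a tree-parameterized driver. All names (the "*.out" extension, the ref/actual layout, tree names like "xalan-gold") are hypothetical; the point is that the caller supplies a single parameter naming the highest-level reference directory, and everything beneath it follows that tree's scheme.

```python
from pathlib import Path

def run_conformance(tree: str, ref_root: Path, actual_root: Path) -> list:
    """Compare actual outputs against the reference tree named by `tree`.

    `tree` is the highest-level directory whose name reflects the
    differences (processor, gold vs. silver, etc.); no name-blending
    logic is needed. Returns the list of mismatched test names.
    """
    mismatches = []
    for ref in sorted((ref_root / tree).glob("*.out")):
        actual = actual_root / ref.name
        # A missing output counts as a mismatch, same as a wrong one.
        if not actual.exists() or actual.read_bytes() != ref.read_bytes():
            mismatches.append(ref.name)
    return mismatches
```

Switching reference sets is then just `run_conformance("xt-silver", ...)` instead of `run_conformance("xalan-gold", ...)`, with no file moves or renames.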

The only way a person invoking a smoke test gets to specify fewer
parameters is if we agree to overlook serialization differences.
Behind the scenes, that means choosing a conformance-style comparator
that covers up entity and CDATA differences. I think the standard
smoke test will require specification of parameters for processor,
entity policies, language-awareness policies, and maybe more, and that
will hold true regardless of which approach from the previous
paragraph is chosen.
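A comparator of that style can be sketched by comparing canonical forms rather than raw bytes; canonicalization expands character references and folds CDATA sections into ordinary text, so those serialization differences disappear. This is just an illustration of the idea (using Python's standard library, not anything in the Xalan test harness):

```python
import xml.etree.ElementTree as ET

def conformance_equal(actual_xml: str, expected_xml: str) -> bool:
    """Judge two serialized documents equal if their canonical forms match.

    Canonical XML replaces CDATA sections with their character content
    and expands predefined character/entity references, so
    <a><![CDATA[x<y]]></a> and <a>x&lt;y</a> compare as equal.
    """
    return (ET.canonicalize(xml_data=actual_xml)
            == ET.canonicalize(xml_data=expected_xml))
```

A stricter byte-for-byte comparison would remain available for tests whose purpose is precisely to check serialization behavior.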
.................David Marston
