On 2017-07-03 14:32, Cantor, Scott wrote:
You might find https://issues.apache.org/jira/browse/XERCESC-2104 of
interest. This replaces sanityTest.pl with separate automake checks.
You can still run "make check", but it now shows you each individual
test being run and stores the logs in separate files.
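The per-test logging described above can be sketched as follows. This is only an illustration of how automake's test harness captures each test's output in its own `.log` file; `DOMCountTest` is a hypothetical stand-in, not a real Xerces-C test name, and in a real checkout you would simply run `./configure && make && make check`.

```shell
# Hypothetical stub test; a real tree registers tests in Makefile.am.
printf '#!/bin/sh\necho "DOMCount: ok"\n' > DOMCountTest
chmod +x DOMCountTest

# What the automake harness does for each registered test:
# run it and capture its output in <test>.log.
./DOMCountTest > DOMCountTest.log 2>&1
cat DOMCountTest.log
```

With one log per test, a single failure can be inspected in isolation instead of digging through one concatenated output stream.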
On 2017-07-03 14:55, Cantor, Scott wrote:
Roger, is that separate fork updated with the master copy? It looks like maybe
it's missing the bug fixes I checked in Friday after you let me know they were
failing. That would certainly explain it.
The link issue was just a dangling reference to 3_1 in the ICU build from the
version change,
> You might find https://issues.apache.org/jira/browse/XERCESC-2104 of
> interest. This replaces sanityTest.pl with separate automake checks.
> You can still run "make check", but it now shows you each individual
> test being run and stores the logs in separate files. This makes it
> much
On 29/06/17 20:47, Cantor, Scott wrote:
On 6/29/17, 3:25 PM, "Roger Leigh" wrote:
It's "scripts/sanityTest.pl", a Perl script which runs all the tests,
concatenates their output, and then diffs it with the expected output.
It fails if the output differs or the tests fail
Ack, never mind, PEBKAC, they're running now.
-- Scott
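The sanityTest.pl model described above (run every test, concatenate the output, diff it against an expected-output file, and fail on any difference or premature exit) can be sketched in Python. This is not the actual Perl script; the commands and expected text are stand-ins for the real Xerces-C test programs.

```python
import difflib
import subprocess

# Hypothetical stand-in commands; the real script runs the Xerces-C
# test programs and samples from the build tree.
tests = [
    ["echo", "DOMCount: 37 elems"],
    ["echo", "SAXPrint: ok"],
]

def run_all(cmds):
    """Run every test and concatenate stdout (the sanityTest.pl model)."""
    chunks = []
    for cmd in cmds:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode != 0:  # a test failing prematurely is fatal
            raise RuntimeError(f"{cmd[0]} exited with {proc.returncode}")
        chunks.append(proc.stdout)
    return "".join(chunks)

def check(actual, expected):
    """Diff the concatenated output against the expected text."""
    if actual != expected:
        raise AssertionError("".join(difflib.unified_diff(
            expected.splitlines(keepends=True),
            actual.splitlines(keepends=True),
            fromfile="expected", tofile="actual")))

expected = "DOMCount: 37 elems\nSAXPrint: ok\n"
check(run_all(tests), expected)
```

The fragility discussed later in the thread follows directly from this design: any cosmetic change to a tool's output (such as an extra line in its help text) changes the concatenated stream and fails the diff, even though nothing is actually broken.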
On 6/29/17, 3:49 PM, "Cantor, Scott" wrote:
On 6/29/17, 3:46 PM, "Roger Leigh" wrote:
> Actually, just run "make check" which builds the tests and runs them for you.
Not for me unfortunately.
export
On 6/29/17, 3:25 PM, "Roger Leigh" wrote:
> It's "scripts/sanityTest.pl", a Perl script which runs all the tests,
> concatenates their output, and then diffs it with the expected output.
> It fails if the output differs or the tests fail prematurely.
Well, I've run that
On 29/06/17 20:25, Roger Leigh wrote:
On 29/06/17 20:17, Cantor, Scott wrote:
On 6/29/17, 3:02 PM, "Roger Leigh" wrote:
The recent trunk changes broke a few of the unit tests.
I don't understand how, other than the ones that are for some reason
depending on the output of the parameter options for the DOMCount sample.
On 6/29/17, 3:31 PM, "Roger Leigh" wrote:
> This is because the unit test is comparing the tool help output line by
> line and it's simply due to an extra line being added to the help
> output. It's not a fault of the change, it's just that the test data
> needs
On 6/29/17, 3:02 PM, "Roger Leigh" wrote:
> The recent trunk changes broke a few of the unit tests.
I don't understand how, other than the ones that are for some reason depending
on the output of the parameter options for the DOMCount sample. That seems like
an odd test,
On 22/06/17 19:23, Cantor, Scott wrote:
I've ported essentially all code-related changes and a decent amount of the web
site changes from the 3.1 branch back up to trunk.
At least one of the original security fixes to the branch apparently caused a regression,
which I wasn't surprised by. I