On 10/08/12 22:21, Martin Sebor wrote:
If there are deficiencies/failures in the test that you plan
to work on fixing I would suggest doing that first, and making
other improvements only after the fixes have been verified.
I see no problem with removing some of the old Visual C++
cruft (e.g., workarounds for MSVC 6 bugs), but again, I'd
suggest making these changes after fixing any bugs (unless
the workarounds themselves are causing the failures).

Good point. I read the incident as asking for a cleanup job, basically.

The workarounds were causing unnecessary failures; at least one was defective and/or applied when it was not needed. I have made another patch over the weekend that is more conservative: it corrects the workarounds rather than eliminating them, and it preserves the libstd tests, although they are unusable right now.

I don't think we need to open a separate issue for the current test failures that are indicative of problems in the library, e.g., the __rw_strnxfrm embedded-NUL defect. Do you see any problem with lumping those into 970?

Liviu


Martin

On 10/06/2012 02:54 PM, Liviu Nicoara wrote:
On 10/01/12 11:06, Martin Sebor wrote:
On 10/01/2012 06:57 AM, Liviu Nicoara wrote:

Also, I see that the localization tests do not use input files,
unlike the older Rogue Wave tests. Is it the policy going forward
that tests should not use external input files?

The tests hardcode locale values in order to guarantee consistent
results, even if the external locale databases change. There is
also a makefile target that builds all the stdcxx locales, but
that is just to exercise the locale utility programs. I think
there should also be a test that uses localedef to build a subset
of these locales, runs the locale utility to dump the contents of
the built database, and then runs localedef again to rebuild the
database. It then compares the results of the first and second
builds (or it may do three stages to normalize things) to make
sure they match.
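
A rough sketch of the final comparison step, as a self-contained
C++ program; the dump file names are placeholders, and building
the locales with localedef and producing the dumps would be
driven from the makefile, so none of that is shown here:

// Compare the dumps produced after the first and second builds.
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

// Read an entire dump file into a string.
static std::string slurp (const char* path)
{
    std::ifstream in (path, std::ios::binary);
    return std::string (std::istreambuf_iterator<char>(in),
                        std::istreambuf_iterator<char>());
}

int main (int argc, char* argv [])
{
    if (argc < 3) {
        std::cerr << "usage: " << argv [0] << " <dump-1> <dump-2>\n";
        return 2;
    }

    // The dumps from the first and second builds should be
    // identical (after the optional third, normalizing stage).
    const bool match = slurp (argv [1]) == slurp (argv [2]);
    std::cout << (match ? "dumps match" : "dumps differ") << '\n';
    return match ? 0 : 1;
}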

I have dusted off the 22.locale.collate.cpp test file, removing old
workarounds and tests for which we no longer have the input.

The important finding of this exercise is that the test fails in the
collation of wide strings with embedded NULs. The wide facet
specialization uses wcscoll, if available, but unlike the narrow
specialization it does not take embedded NULs into account.
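
For illustration, here is a minimal sketch of the kind of segmented
comparison the narrow specialization performs, applied to wide
strings: wcscoll is called on one NUL-terminated segment at a time.
This is not the stdcxx code, and collate_with_nuls is a hypothetical
name:

#include <cwchar>    // std::wcscoll, std::wcslen
#include <iostream>

static int collate_with_nuls (const wchar_t* lo1, const wchar_t* hi1,
                              const wchar_t* lo2, const wchar_t* hi2)
{
    // Compare one NUL-terminated segment at a time; a single
    // wcscoll call on the whole range would stop at the first
    // embedded NUL.
    while (lo1 < hi1 && lo2 < hi2) {
        const int cmp = std::wcscoll (lo1, lo2);
        if (cmp)
            return cmp;
        // Skip past the segment and its terminating NUL.
        lo1 += std::wcslen (lo1) + 1;
        lo2 += std::wcslen (lo2) + 1;
    }
    // Equal prefixes: the shorter sequence collates first.
    return hi1 - lo1 < hi2 - lo2 ? -1 : hi1 - lo1 > hi2 - lo2 ? 1 : 0;
}

int main ()
{
    // L"ab\0a" vs. L"ab\0b": wcscoll alone sees both as L"ab" and
    // reports equality; the segmented comparison finds the difference.
    const wchar_t a [] = { L'a', L'b', L'\0', L'a', L'\0' };
    const wchar_t b [] = { L'a', L'b', L'\0', L'b', L'\0' };

    std::cout << "wcscoll:   " << std::wcscoll (a, b) << '\n';
    std::cout << "segmented: "
              << collate_with_nuls (a, a + 4, b, b + 4) << '\n';
}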

I am attaching the full test and the diff (which is quite hard to read).
As is, the test has received only a facelift, with no substantive improvements.

Thanks,
Liviu

