On 10/06/12 16:54, Liviu Nicoara wrote:

> The important finding of this exercise is that the test fails in the
> collation of wide strings with embedded NULs. The wide facet
> specialization uses wcscoll, if available, but does not take into
> account embedded NULs, like the narrow specialization does.
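(For the record, a fix along those lines could drive wcscoll one NUL-separated segment at a time, the way the narrow code is meant to. The sketch below is only illustrative, coll_with_nuls is a made-up name and not library code, and it assumes each range is NUL-terminated at its upper bound, as std::wstring::c_str() guarantees:)

```cpp
#include <cwchar>
#include <string>

// Illustrative helper, not the library's code: collate two wide ranges
// that may contain embedded NULs by invoking wcscoll on each
// NUL-separated segment in turn. Assumes *hi1 and *hi2 are NULs
// (true for ranges backed by std::wstring::c_str ()).
int coll_with_nuls (const wchar_t* lo1, const wchar_t* hi1,
                    const wchar_t* lo2, const wchar_t* hi2)
{
    while (lo1 < hi1 && lo2 < hi2) {
        // wcscoll stops at the first NUL: compare one segment at a time
        const int cmp = std::wcscoll (lo1, lo2);
        if (cmp)
            return cmp < 0 ? -1 : 1;

        // equal segments: step past each segment and its embedded NUL
        lo1 += std::wcslen (lo1) + 1;
        lo2 += std::wcslen (lo2) + 1;
    }

    // one or both ranges exhausted: the shorter range collates first
    return lo1 < hi1 ? 1 : lo2 < hi2 ? -1 : 0;
}
```

In the default "C" locale wcscoll degenerates to an ordinal comparison, so L"a\0b" collates before L"a\0c", and a proper prefix collates before the longer range.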

I have spent some more time lately digging into the collate test, hoping to improve it.

I have noticed that the transform in the narrow collate_byname specialization (4.2.x/.../src/collate.cpp:479, function __rw_strnxfrm), which I believed to be correct in my previous post, is most likely broken as well. The following test case fails for all locales other than "C" and "POSIX" when using libc:

$ cat t.cpp; nice make t && ./t af_ZA.utf8 || echo failed
#include <iostream>
#include <locale>
#include <string>

int main (int argc, char** argv)
{
    char const c [] = "a\0c";

    std::locale loc (argv [1]);

    const std::collate<char>& col =
        std::use_facet<std::collate<char> > (loc);

    std::string s = col.transform (c, c + sizeof c / sizeof *c - 1);

    for (std::string::size_type i = 0; i < s.size (); ++i) {
        if (0 == s [i])
            return 0;
    }

    return 1;
}
make: `t' is up to date.

The test case shows that the narrow transform removes the embedded NULs from the input string when it should not: the output should contain the embedded NULs in the exact positions in which they appear in the input string. Eliminating the NULs alters the results of the facet's corresponding compare operations when using libc.
