>If the C compiler does NOT do some consistent normalization, then
>the actual IDENTIFIER that the linker tries to resolve will not
>match and the link will fail.

You are assuming that careless programmers have a mix of source
code that is not self-consistent. If the convention is to use NF-C
and the programmer generates a bunch of NF-D code which he tries
to link to NF-C code, then at most I would say give him a warning
from the compiler, then keep going. If the link fails, well, it's
supposed to.
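To make the mismatch concrete, here is a small Python sketch (my own illustration, not from the thread) showing why a byte-comparing linker would never match an NF-C symbol against an NF-D one:

```python
import unicodedata

# The identifier "café" spelled two ways: NFC uses the precomposed
# e-acute (U+00E9); NFD uses plain "e" followed by the combining
# acute accent (U+0301).
nfc = unicodedata.normalize("NFC", "caf\u00e9")
nfd = unicodedata.normalize("NFD", "caf\u00e9")

print(nfc == nfd)           # False: different code point sequences
print(nfc.encode("utf-8"))  # b'caf\xc3\xa9'
print(nfd.encode("utf-8"))  # b'cafe\xcc\x81'
# A linker that compares symbol names byte-for-byte sees two
# unrelated symbols here, so mixing the two conventions fails to
# link -- exactly the behavior being argued about above.
```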



>To assume that some keyboard input method does the normalization
>is naive.  Lots of C programs are machine-generated (more and more
>in these days of UML -> WSDL -> whatever language tools).

Like I said, if you want to generate source compatible with
other people's source, just use the convention they do. The
compiler/toolset doesn't have to care about it, and it's
better that it doesn't.


Now, on the other hand, I could imagine a tool which takes a
text file, and applies an "ANYUTF8 -> C/C++_NORM_FORM"
transformation on it, as a convenience for those with poor
input systems. But definitely not mandatory, because you may
just want to write a wacky program which has lots of non-normal
forms in it on purpose, and there is no need for the compiler
to contrive a failure.
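Such a tool is trivially small. A Python sketch (the name `normalize_source` and the choice of NFC as the target form are my assumptions, standing in for the "C/C++_NORM_FORM" above):

```python
import sys
import unicodedata


def normalize_source(text: str, form: str = "NFC") -> str:
    """Apply one Unicode normalization form to an entire source file.

    A voluntary pre-pass for people whose input methods emit mixed
    forms -- not something the compiler itself should impose.
    """
    return unicodedata.normalize(form, text)


if __name__ == "__main__":
    # Usage: python normalize.py < input.c > output.c
    sys.stdout.write(normalize_source(sys.stdin.read()))
```

Run as a filter before compiling; anyone who wants non-normal forms on purpose simply skips it.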
--
Linux-UTF8:   i18n of Linux on all levels
Archive:      http://mail.nl.linux.org/linux-utf8/