On 18 January 2016 at 14:12, Masamichi HOSODA <[email protected]> wrote:

>> If I understand correctly, you are changing the category codes of the
>> Unicode characters when writing out to an auxiliary file, but only for
>> those Unicode characters that are defined. This leads the Unicode
>> character to be written out as a UTF-8 sequence. For the regular
>> output, the definitions given with \DeclareUnicodeCharacter are used
>> instead of trying to get a glyph for the Unicode character from a
>> font. If there's no definition given, then the character must be in
>> the font.
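(For readers following along: \DeclareUnicodeCharacter maps a Unicode
codepoint, given as hex digits, to an arbitrary TeX construct, much as
in LaTeX's inputenc. A minimal sketch, with U+00E9 chosen purely as an
illustration:

```tex
% Illustrative only: give U+00E9 (e with acute accent) an explicit
% definition, so regular output typesets \'e instead of trying to
% fetch the glyph for that codepoint from the current font.
\DeclareUnicodeCharacter{00E9}{\'e}
```

If no such definition exists for a codepoint, the glyph has to come
from the font itself, which is the trade-off discussed above.)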
>> Using the character definitions built in to texinfo.tex with
>> \DeclareUnicodeCharacter may give less good results than using the
>> glyphs from a proper Unicode font.
>
> Thank you for your comments.
> I've updated the patch.
>
> I want the following.
> UTF-8 auxiliary file.
> Handling Unicode filename (image files and include files).
> Handling Unicode PDF bookmark strings.

Thanks for working on this. I've had a look at the most recent patch,
which resolves the category code fixing problem.

I see you are using native UTF-8 input throughout, but I can't see how
this could support "@documentencoding ISO-8859-1" (or any other
single-byte encoding). I think the things you mention above could be
supported without using native UTF-8 input.

I don't see the problem with Unicode filenames: files are named with a
series of bytes; does this mean that XeTeX (or LuaTeX?) has problems
accessing files whose names aren't in UTF-8? Are PDF bookmarks also
written out incorrectly?

It's useful to include a ChangeLog entry when posting patches to this
list, because it gives a summary of what was changed and why.

One thing I wondered about was whether \DeclareUnicodeCharacterNativeAtU
and \DeclareUnicodeCharacterNative needed to be separate macros.
