Hello, all.

I'm dealing with an API that claims it doesn't support Unicode characters
with embedded null bytes.
I'm trying to figure out how much of a liability this is.

What is my best plan of attack for discovering precisely which code points
have embedded nulls, given a particular encoding?  I didn't find anything in
the mailing-list archive, and I've googled for quite a while with no luck.

I'll want to do this for a few different versions of Unicode and a few
different encodings.
What if I write a program using some of the data files available at
unicode.org?
Am I crazy (I'm new at this stuff), or am I getting warm?
Perhaps this data file: http://www.unicode.org/Public/UNIDATA/UnicodeData.txt ?

Algorithm:
INPUT: name of Unicode code point file
INPUT: name of encoding (perhaps UTF-8)

Read a code point from the file.
Encode the code point in the given encoding.
Test each constituent byte for 0x00.
Repeat for the next code point.
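For what it's worth, the loop above can be sketched in a few lines of Python.
This is just a sketch of the idea, not something I've tested against any real
API; it also skips the UnicodeData.txt file entirely, since you can iterate
over all code points directly (the file would only matter if you want to
restrict the scan to characters assigned in a particular Unicode version).
The encoding name is whatever Python's codec machinery accepts, e.g. "utf-8",
"utf-16-be", or "utf-32-be".

```python
def code_points_with_embedded_nulls(encoding):
    """Yield code points whose encoded form contains a 0x00 byte."""
    for cp in range(0x110000):           # all Unicode code points
        if 0xD800 <= cp <= 0xDFFF:       # surrogates are not encodable
            continue
        try:
            encoded = chr(cp).encode(encoding)
        except UnicodeEncodeError:
            continue                     # not representable in this encoding
        if 0x00 in encoded:
            yield cp

# Usage: list the code points whose UTF-8 form contains a null byte.
# In UTF-8 that should be only U+0000 itself, by design; in UTF-16 or
# UTF-32, by contrast, every ASCII character has embedded nulls.
print([hex(cp) for cp in code_points_with_embedded_nulls("utf-8")])
```

The general answer this illustrates: whether embedded nulls are a liability
depends almost entirely on the encoding. UTF-8 was deliberately designed so
that 0x00 appears only when encoding U+0000, while the fixed-width encodings
pad small code points with null bytes.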

Thanks in advance for any help,

--Erik O.
