Simon Berthiaume wrote:
>> Notice that text strings are always transferred as type "char*" even if the text representation is UTF-16.

This might force users to explicitly type cast some function calls to avoid warnings. I would prefer UNICODE-neutral functions that can take either one of them depending on the setting of a compilation #define (UNICODE). Create a function that takes char * and another that takes wchar_t *, then encourage the use of a #defined symbol that switches depending on context (see example below). It would allow people to call the functions whichever way they want.

Example:

int sqlite3_open8(const char*, sqlite3**, const char**);        /* UTF-8 entry point */
int sqlite3_open16(const wchar_t*, sqlite3**, const wchar_t**); /* UTF-16 entry point */
#ifdef UNICODE
#define sqlite3_open sqlite3_open16
#else
#define sqlite3_open sqlite3_open8
#endif
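
A caller-side sketch of how that switch would be used, in the spirit of the Win32 TCHAR convention; the declarations above are repeated so the fragment stands alone, and the sqlite3_char typedef and SQLITE_TEXT_LIT macro are purely illustrative names, not part of any proposal:

#include <stddef.h>                       /* wchar_t */

typedef struct sqlite3 sqlite3;           /* opaque handle, as in the real API */
int sqlite3_open8(const char*, sqlite3**, const char**);
int sqlite3_open16(const wchar_t*, sqlite3**, const wchar_t**);

#ifdef UNICODE
#define sqlite3_open sqlite3_open16
typedef wchar_t sqlite3_char;             /* illustrative name only */
#define SQLITE_TEXT_LIT(s) L##s
#else
#define sqlite3_open sqlite3_open8
typedef char sqlite3_char;
#define SQLITE_TEXT_LIT(s) s
#endif

int open_example(void) {
  sqlite3 *db = 0;
  const sqlite3_char *errmsg = 0;
  /* The same source line compiles in either configuration; the UNICODE
  ** setting picks sqlite3_open8 or sqlite3_open16 at preprocessing time. */
  return sqlite3_open(SQLITE_TEXT_LIT("test.db"), &db, &errmsg);
}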



I'm told that wchar_t is 2 bytes on some systems and 4 bytes on others. Is it really acceptable to use wchar_t* as a UTF-16 string pointer?
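
(As a quick illustration of the problem, a check like the following prints the width on a given system; it is typically 2 bytes on Windows and 4 bytes on Linux and Mac OS X, so a wchar_t string is UTF-16 on one and UTF-32 on the other.)

#include <stdio.h>
#include <stddef.h>

/* Prints the width of wchar_t on the current platform: commonly 2 bytes
** on Windows and 4 bytes on most Unix-like systems, which is why wchar_t*
** is questionable as a portable UTF-16 pointer type. */
int main(void) {
  printf("sizeof(wchar_t) = %u bytes\n", (unsigned)sizeof(wchar_t));
  return 0;
}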

Note that internally, sqlite3 will cast all UTF-16 strings to be of
type "unsigned char*".  So the type in the declaration doesn't really
matter. But it would be nice to avoid compiler warnings.  So what datatype
are most systems expecting to use for UTF-16 strings?  Who can provide
me with a list?  Or even a few examples?
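
One possibility, sketched here with the three-argument signature from the example above (not a settled interface), is to declare the UTF-16 entry point with const void*, so that wchar_t*, unsigned short*, or any other pointer to 16-bit data can be passed without a cast or warning:

#include <stddef.h>

typedef struct sqlite3 sqlite3;   /* opaque handle */

/* Sketch only: taking const void* sidesteps the wchar_t-width question,
** since any object pointer converts to void* in C without a cast or
** warning.  Internally the buffer would still be read as 16-bit code
** units (e.g. through an unsigned char* or unsigned short* pointer). */
int sqlite3_open16(const void *zFilename16, sqlite3 **ppDb,
                   const void **pzErrMsg16);

/* A caller can pass whatever 16-bit string type its platform favors: */
int open_utf16(const unsigned short *zName16, sqlite3 **ppDb) {
  const void *zErr = 0;
  int rc = sqlite3_open16(zName16, ppDb, &zErr);  /* no cast needed */
  /* zErr, if set, is cast back to the caller's own 16-bit string type. */
  return rc;
}

With a void* declaration the header never has to commit to wchar_t, unsigned short, or any other particular 16-bit type.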


-- D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565




