On Tue, Aug 9, 2016 at 3:01 AM, Sergei Nikulov <[email protected]> wrote:
> 2016-08-09 12:49 GMT+03:00 Daniel Stenberg <[email protected]>:
>> On Tue, 9 Aug 2016, Sergei Nikulov wrote:
>>
>>> All conversion used to be done automatically by defining UNICODE and
>>> _UNICODE for Windows. So my idea is simple - typedef some kind of
>>> CURL_CHAR and use it instead of plain char. This typedef will be simple
>>> char on Unix/Linux variants and TCHAR for Windows.
>>
>> I take it that's a take for using file names using unicode?
>>
>> What happens to existing applications if we'd ship a libcurl with that
>> enabled, won't there be (a risk of) breakage? Ie ABI incompatibility?
>
> Theoretically it should not for Linux/Unix, because a typedef is just a
> type alias. But in practice it may depend on the compiler (not sure).
>
> For Windows it definitely breaks the ABI, depending on whether the UNICODE
> define is set or not set during the build. But I'm not sure whether
> libcurl is used from common locations the way it is on Linux/Unix.
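The CURL_CHAR proposal quoted above could be sketched roughly as follows. This is a hypothetical illustration, not actual libcurl code; `CURL_CHAR` is the name proposed in the thread, and on Windows the alias would follow the UNICODE build setting via TCHAR:

```c
/* Hypothetical sketch of the proposed CURL_CHAR typedef -- not real
 * libcurl API. On Windows, TCHAR expands to wchar_t when UNICODE is
 * defined and to char otherwise, which is exactly the ABI concern
 * raised above: the same header yields two incompatible types. */
#ifdef _WIN32
#include <tchar.h>
typedef TCHAR CURL_CHAR;   /* wchar_t or char, depending on UNICODE */
#else
typedef char CURL_CHAR;    /* plain alias; no ABI change on Unix/Linux */
#endif
```

On Unix/Linux the alias changes nothing, which matches Sergei's point; on Windows the effective type (and thus the ABI of any function taking `CURL_CHAR *`) flips with the UNICODE define.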
The UNICODE #define works for Win32 API calls, but it does not work for CRT
calls, of which I believe there are plenty in the curl code. I'm willing to
help out with the conversion effort.

The best approach is not obvious. Normally on my Windows projects I prefer
to use wide strings (UTF-16) everywhere, since that is native to Windows.
It makes sense for the non-Windows part of the curl code to use UTF-8, but
then we would have to convert to UTF-16 every time we make an API or CRT
call. Having multiple build configurations is also not ideal, since there
are so many of them already.

My own suggestion is to use just-in-time (JIT) conversions. That is pretty
much what the code is doing now anyway, since the OS does the conversions
behind the scenes, just not from the correct code page.

- Henri
-------------------------------------------------------------------
List admin: https://cool.haxx.se/list/listinfo/curl-library
Etiquette: https://curl.haxx.se/mail/etiquette.html
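The just-in-time conversion approach Henri suggests might look something like the sketch below: the library keeps UTF-8 internally and widens to UTF-16 only at the point of a Win32/CRT call. `MultiByteToWideChar` and `_wfopen` are real Windows APIs; the helper and wrapper names are assumptions for illustration, not actual libcurl functions:

```c
/* Hypothetical JIT-conversion sketch -- not actual libcurl code.
 * Strings stay UTF-8 inside the library; conversion to UTF-16 happens
 * only at the OS boundary, avoiding a wholesale wide-string rewrite. */
#ifdef _WIN32
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Assumed helper name: widen a NUL-terminated UTF-8 string to UTF-16.
 * Returns a malloc'ed buffer the caller must free, or NULL on error. */
static wchar_t *utf8_to_wchar(const char *utf8)
{
  wchar_t *wide = NULL;
  int len = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                                utf8, -1, NULL, 0);
  if(len > 0) {
    wide = malloc(len * sizeof(wchar_t));
    if(wide)
      MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                          utf8, -1, wide, len);
  }
  return wide;
}

/* Assumed wrapper name: convert just before the wide-char CRT call. */
FILE *jit_fopen(const char *utf8name, const char *mode)
{
  FILE *f = NULL;
  wchar_t *wname = utf8_to_wchar(utf8name);
  wchar_t *wmode = utf8_to_wchar(mode);
  if(wname && wmode)
    f = _wfopen(wname, wmode);
  free(wname);
  free(wmode);
  return f;
}
#endif /* _WIN32 */
```

The trade-off Henri notes still applies: every API/CRT call pays a conversion, but callers keep passing plain `char *` UTF-8 strings, so neither the public API nor the ABI changes.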
