No, it is not that simple at all unless you decide to natively support
UTF-16LE (BMP only) Unicode. Microsoft and all of its major vendors support
Windows and NTFS, both of which natively support only UTF-16LE (BMP only)
Unicode. Any other encoding requires on-the-fly translation, which makes
UTF-16LE support in Windows easy, simple, and elegant, and anything else
very difficult.

Also, FYI: all text types should be generic, for example _TCHAR (from
tchar.h) instead of char. That way, when you set the compiler option for
UNICODE support, the compiler automatically maps _TCHAR to wchar_t;
otherwise it maps to char. It is the same idea as using size_t and
ptrdiff_t for portable 32/64-bit support.
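
For example, a minimal sketch of the generic-text mapping (MSVC-specific,
using tchar.h):

    /* With the compiler option that defines _UNICODE, _TCHAR becomes
       wchar_t and _tprintf becomes wprintf; without it, they become
       char and printf. */
    #include <tchar.h>
    #include <stdio.h>

    int _tmain(int argc, _TCHAR *argv[])
    {
        const _TCHAR *msg = _T("hello");  /* _T() picks "..." or L"..." */
        _tprintf(_T("%s\n"), msg);        /* maps to printf or wprintf */
        return 0;
    }

(In the Microsoft CRT, %s is interpreted relative to the function's own
character width, so the same format string works in both modes.)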

Changing the compiler to UNICODE support will also change all the relevant
Windows API calls from *A to *W, which could be a problem if you don't
understand how Unicode works (or fails to work) in other areas. For example,
I don't know of any compiler that automatically maps plain CRT calls from
fopen to _wfopen when UNICODE support is enabled (tchar.h does provide a
generic _tfopen that maps that way, but you have to call it explicitly). If
plain calls were remapped, you could transparently open native Windows NTFS
files without having to do custom decoding/encoding. That would make things
much easier by eliminating the need for codepages (which, for example, fopen
requires but _wfopen does not). Then programmers would never have to worry
about ridiculous situations where a customer can't change their codepage,
doesn't know how to, or is too afraid to try. As it stands, you have to live
with the fact that your application may be incompatible with many other
people's computers until they select exactly the same codepage as you
(which, more often than not, is not likely to happen).
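
As a hedged sketch of how such a wrapper might look (the name utf8_fopen is
hypothetical): the caller always passes UTF-8, and on Windows the wrapper
converts to UTF-16 and calls _wfopen, so no ANSI codepage is ever involved:

    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>

    FILE *utf8_fopen(const char *filename, const char *mode)
    {
        wchar_t wname[MAX_PATH], wmode[32];
        if (!MultiByteToWideChar(CP_UTF8, 0, filename, -1, wname, MAX_PATH) ||
            !MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 32))
            return NULL;  /* conversion failed, e.g. name too long */
        return _wfopen(wname, wmode);
    }
    #else
    #define utf8_fopen fopen  /* POSIX file APIs take the UTF-8 bytes as-is */
    #endif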

Because almost all IDEs and source code files are written only in ASCII or
ANSI, string literals cannot be stored in the source as Unicode and must be
translated by the compiler. Since Microsoft is dominant everywhere in the
world, almost all compilers/IDEs support Microsoft conventions like
prefixing a literal string with "L", which translates the literal into
UTF-16LE (BMP only) Unicode. But again, none of this would be necessary if
IDEs and source files supported UTF-16LE Unicode directly instead of ASCII
only.
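
For example, under a Microsoft compiler:

    const char    *narrow = "hello";   /* bytes in the source encoding   */
    const wchar_t *wide   = L"hello";  /* UTF-16LE code units on Windows */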

Regards,
Andrew


On 2018-09-12 at 2:42 AM, Antonio Scuri <antonio.sc...@gmail.com> wrote:
  I don't think it is that simple.


  The right Unicode support would have wchar_t* instead of char* in all
function calls, and especially in the Lua binding. We don't need to change
the current API, but we would at least have to add new functions wherever
there is a string parameter to correctly support Unicode.


  That's why I like UTF-8: we can keep the same API and change things only
at the system function calls.
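
A minimal sketch of that pattern, where mylib_set_title is a hypothetical
placeholder (not a real IUP function) and only the system call sees wchar_t:

    #include <windows.h>

    /* Public API keeps char*, now interpreted as UTF-8. */
    void mylib_set_title(HWND hwnd, const char *utf8_title)
    {
        wchar_t wtitle[256];
        MultiByteToWideChar(CP_UTF8, 0, utf8_title, -1, wtitle, 256);
        SetWindowTextW(hwnd, wtitle);  /* wide call only at the boundary */
    }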


  Anyway, this is not on our short-term task list. But I think it will
eventually become necessary.


Best,
Scuri




On Wed, Sep 12, 2018 at 05:19, 云风 Cloud Wu <clou...@gmail.com> wrote:

Antonio Scuri <antonio.sc...@gmail.com> wrote on Monday, September 10, 2018 at 7:03 PM:

  Hi 云风,


  In IUP, CD and IM we use fopen to read and write files most of the time,
with a few exceptions when there is a native API that loads or saves the file
for us.  


  IUP accesses files mainly for configuration files in LED, Lua, or cfg
(IupConfig).


  CD writes metafiles, for instance.


  IM loads and saves image files in several different formats using different
APIs, but most of the time uses fopen too.


  So, actually they all have the same problem. 




Could you consider writing an fopen wrapper (and using a macro define for
compatibility) for better Unicode support?


On Windows, we can use _wfopen to solve the encoding problems.
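
For example, a sketch of the macro approach (im_fopen and utf8_fopen are
hypothetical names):

    #ifdef _WIN32
    /* utf8_fopen would convert UTF-8 to UTF-16 and call _wfopen */
    #define im_fopen utf8_fopen
    #else
    #define im_fopen fopen
    #endif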


_______________________________________________
Iup-users mailing list
Iup-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/iup-users
