I am wondering why libunicode does not implement
unicode_string_width (I'm looking at v0.4, which I
believe to be the latest version).
There is an implementation (covering the BMP, at least) at
http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
by Markus Kuhn. With a little effort it could be dropped
into place, using unicode_get_utf8 for the decoding; an
example is below. mk_wcwidth could use some simplification,
but it's good enough as-is.
Another gripe: unicode_char_t should be a signed long.
Unicode needs only 21 bits, so a signed 32-bit-or-wider
type still leaves the sign free for things such as error
codes (negative values). Using unsigned int is not the
best idea, I think: int may be as narrow as 16 bits,
while long is guaranteed at least 32.
~~~~
int
unicode_string_width(const char *p)
{
    int w, width = 0;

    if (!p)
        return -1;

    while (*p) {
        unicode_char_t wc;

        /* decode one UTF-8 sequence; NULL means malformed input */
        p = unicode_get_utf8(p, &wc);
        if (!p)
            return -1;

        /* mk_wcwidth() returns -1 for non-printable characters */
        if ((w = mk_wcwidth(wc)) < 0)
            return -1;
        width += w;
    }
    return width;
}
--
Linux-UTF8: i18n of Linux on all levels
Archive: http://mail.nl.linux.org/linux-utf8/