"Jarkko Hietaniemi" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]

> You do know that ...

Yes.

If wctomb or mbtowc are to be used, then Perl's Unicode must be converted
either to the locale's wide-char encoding or to its multibyte encoding. This
isn't trivial, but Mozilla solved this same problem; it can be made to work
portably. (Are you listening, Brian Stell?) It wasn't easy for them, but they
did it.
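
Something along these lines with the Encode module would do it, I think. This
is only a sketch: it assumes a Japanese Win32 locale whose multibyte encoding
is Shift-JIS (cp932), and the helper names are made up for illustration.

    use Encode qw(encode decode);

    # Assumed locale encoding: cp932 (Shift-JIS as used on Japanese Windows).
    my $locale_enc = 'cp932';

    # Perl's internal Unicode string -> the locale's multibyte bytes
    sub to_locale   { encode($locale_enc, $_[0]) }

    # the locale's multibyte bytes -> Perl's internal Unicode string
    sub from_locale { decode($locale_enc, $_[0]) }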

> Imagine ...

I don't have to imagine. But I think that where a Perl script opens its files
is its own business; I don't see why Perl would have to do anything in that
regard. Even if it did, I don't see that feature as blocking the simpler one
of just converting to/from multibyte before/after a system call. If I'm
dealing only with Japanese on a Japanese system, that's all I need.
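
For that Japanese-on-Japanese case, a minimal sketch of what I mean, assuming
the filesystem wants cp932 bytes (is_dir_jp is a made-up name):

    use Encode qw(encode);

    # Encode the Unicode path to the locale's multibyte form (cp932 here)
    # before it reaches the byte-oriented filesystem test.
    sub is_dir_jp {
        my ($unicode_path) = @_;
        return -d encode('cp932', $unicode_path);
    }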

> Uhhh... from a Win32 API bug workaround you deduce that ... SJIS should
> work?


Here's my dilemma: UTF-8 doesn't work as an argument to -d, and neither does
Shift-JIS (at least with certain Shift-JIS characters). Those are my only
choices. So you're basically saying 'Shift-JIS be damned, write a module'? I
hope you'll understand if I find it hard to sympathize with that reasoning. As
for 'use the best tool for the job': there are plenty of other tools out there
that can process Shift-JIS just fine.
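
For what it's worth, the characters that bite are, I'd guess, the usual
suspects: Shift-JIS characters whose trailing byte is 0x5C, the ASCII
backslash, which byte-oriented Win32 path handling takes as a directory
separator. The examples below are my own illustration, not the exact
characters from my failing case.

    # Assumed examples of Shift-JIS characters whose second byte is 0x5C ('\'):
    my %trailing_backslash = (
        "\x95\x5C" => "U+8868 (hyou, 'table')",
        "\x83\x5C" => "U+30BD (katakana so)",
        "\x94\x5C" => "U+80FD (nou, 'ability')",
    );
    for my $sjis (sort keys %trailing_backslash) {
        printf "SJIS bytes %vX end in the path separator -> %s\n",
            $sjis, $trailing_backslash{$sjis};
    }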

I'd like to suggest that you alter perlunicode a bit to point out the
limitations more explicitly. It will save some people a lot of time. "Perl
does not attempt to resolve the role of Unicode in these cases" is a
nice-sounding phrase, but "The following will not work correctly if you pass
UTF-8 or multibyte characters, regardless of any 'use encoding' statements"
would be a lot clearer.

I'll see if I can convince someone to add "utf8 and multibyte incompatible"
warnings to all the file-oriented functions and operators in perlfunc. That
will also help. If that's the way it is, it's only fair to make sure everyone
knows what the deal is.

Regards,

=ED

