Doug, I wrote a couple of short extension methods that convert from
Arabic to UTF-16 and back. The pattern goes like this:
- Get input from GUI
- Convert from known code page to utf-16, in this case that's
windows-1256
- Save the data into MV ... it's just text and doesn't get munged on
the wire
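The actual code here is a pair of .NET extension methods, but the round trip itself is easy to sketch in Python's standard codecs; everything below (the sample word, the variable names) is illustrative, not the poster's code:

```python
# A minimal sketch of the decode/encode round trip described above.
# Python's "cp1256" codec is windows-1256; .NET would use
# Encoding.GetEncoding(1256) and its internal UTF-16 strings instead.

text = "\u0633\u0644\u0627\u0645"            # an Arabic word, as Unicode
cp1256_bytes = text.encode("cp1256")          # what a windows-1256 GUI hands you
decoded = cp1256_bytes.decode("cp1256")       # known code page -> Unicode
assert decoded == text                        # lossless round trip

utf8_bytes = text.encode("utf-8")             # the form we actually store in MV
assert utf8_bytes.decode("utf-8") == text
```

The key point is that the code page is declared explicitly on both legs; nothing relies on a default system encoding.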
We process and store most of our info in UTF-8 - this includes multiple
European languages. Not Chinese currently, though that should not be an
issue as long as it is encoded in UTF-8. We also use UniObjects.NET with
this data, no problem.
Well, I say no problem - you do have to make sure your language
settings are correct.
Also, if you are running U2 on Linux, the iconv utility (the Linux
iconv, not the U2 ICONV function) is essential when dealing with
different character encodings. There are also a couple of lesser-known
iconv suffixes, //TRANSLIT and //IGNORE, that are worth googling.
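For reference, a quick sketch of those iconv invocations (assuming GNU/glibc iconv on Linux; the exact //TRANSLIT substitutions are locale-dependent, so no output is asserted here):

```shell
# Convert a windows-1256 stream to UTF-8 (ASCII input shown for brevity)
printf 'hello' | iconv -f WINDOWS-1256 -t UTF-8

# //TRANSLIT: approximate characters the target encoding lacks
# (e.g. an accented letter when converting down to ASCII)
printf 'caf\xc3\xa9\n' | iconv -f UTF-8 -t ASCII//TRANSLIT || true

# //IGNORE: silently drop unconvertible characters instead of aborting
# (glibc iconv still exits non-zero when it drops anything)
printf 'caf\xc3\xa9\n' | iconv -f UTF-8 -t ASCII//IGNORE || true
```

Without either suffix, iconv stops with an error at the first character it cannot represent in the target encoding.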
-Original Message-
First up - Adobe PDF is derived from the Adobe PostScript page
description language (PDL).
Some printer vendors have PostScript emulations. Most business-class
laser/LED printers or multi-function devices (MFDs) today have
intelligent page description language detection built in and typically
I'll cast another vote for UTF-8. This mechanism for storing and
transmitting Unicode data is elegantly designed, and it should be usable
in almost any legacy system that allows 8-bit data. Take a look at its
byte value allocations:
* Hex-00 through hex-7F are plain ASCII (hex-00 through hex-1F being
the standard control characters); these bytes only ever appear as
single-byte characters.
* Hex-80 through hex-BF only ever appear as continuation bytes.
* Hex-C2 through hex-DF begin two-byte sequences, hex-E0 through hex-EF
begin three-byte sequences, and hex-F0 through hex-F4 begin four-byte
sequences.
* Hex-C0, hex-C1, and hex-F5 through hex-FF never appear at all.
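The byte-range claims in this thread are easy to spot-check from any language with built-in UTF-8 support; here is a small Python check (sample string chosen for illustration):

```python
# One character each from ASCII, Latin, CJK, and the supplementary planes,
# covering 1-, 2-, 3-, and 4-byte UTF-8 sequences.
s = "A\u00e9\u4e2d\U0001f600"
b = s.encode("utf-8")

assert all(x <= 0xF4 for x in b)                    # no byte above hex-F4
assert all(x not in (0xFB, 0xFC, 0xFD, 0xFE, 0xFF)  # MV mark bytes (chars
           for x in b)                              # 251-255) never appear
assert 0x80 <= b[2] <= 0xBF                         # continuation byte of "é"
```

This is exactly why the MV delimiter bytes are safe: UTF-8 simply never emits them.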
On 05/04/2013 23:37, Bob Rasmussen wrote:
* The codes for multi-value mark, etc., are not used.
iirc, if you use UV NLS (and presumably UD too) the mark characters have
UTF-8 values assigned. Don't have a clue what they are, though.
Cheers,
Wol
It's a critical point, and worth verifying. If someone will verify
what UV NLS does, that'd be great.
The Unicode Standard states that in UTF-8 no byte can ever have a value
higher than hex-F4.
On Sat, 6 Apr 2013, Anthonys Lists wrote:
On 05/04/2013 23:37, Bob Rasmussen wrote:
* The codes