>> According to RFC 2279, the Euro,
>> Unicode code point 0x20AC = 0010 0000 1010 1100,
>> will be encoded to 1110 0010 1000 0010 1010 1100 = 0xE282AC.
>> IMHO this is the only good and intuitive way for CHR() and ASCII().
> It is beyond ludicrous for functions like chr() or ascii() to
> convert a Euro sign to 0xE282AC rather than 0x20AC. "Intuitive"? There
> is _NO SUCH THING_ as 0xE282AC as a representation of a Unicode
> character - there is either the code point, 0x20AC (which is a
> _number_), or the sequences of _bytes_ that represent that code point
> in various encodings, of which the three-byte sequence 0xE2 0x82 0xAC
> is the one used in UTF-8.
Yes, 0xE2 0x82 0xAC is the representation in UTF-8, and UTF-8 is the
database encoding in use.
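(Both claims are easy to check; a minimal sanity check, assuming a
UTF-8 database - encode() and convert_to() are the spellings in more
recent releases:

test=> show server_encoding;
test=> select encode(convert_to('€', 'UTF8'), 'hex');  -- prints e282ac
)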
> Functions like chr() and ascii() should be dealing with the _number_,
> i.e. the code point, not with its representation in transfer encodings.
I think that we have a fundamental difference of opinion here.
As far as I know, the term "code point" is only used in UNICODE;
it is what the first column in the Unicode character list contains.
So, if I understand you correctly, you want CHR() and ASCII()
to convert between characters (in the current database encoding)
and UNICODE code points (independent of database encoding).
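Just to make the contrast concrete (a sketch of those semantics, not
what the current code does; x'20AC'::int is simply 8364):

test=> select chr(x'20AC'::int);   -- would have to yield '€'
test=> select to_hex(ascii('€'));  -- would have to yield 20ac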
What I suggest (and what Oracle implements; aren't CHR() and ASCII()
there partly for Oracle compatibility?) is that CHR() and ASCII()
convert between a character (in the database encoding) and its byte
representation in that encoding, expressed as a number.
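As a concrete sketch of this scheme (again assuming a UTF-8 database):
the bytes 0xE2 0x82 0xAC read as one number are 0xE282AC = 14844588, so

test=> select chr(14844588);  -- would yield '€'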
I think that what you suggest would be a useful function too,
but I certainly wouldn't call such a function ASCII() :^)
The current implementation seems closer to my idea of ASCII():

test=> select to_hex(ascii('€'));
What do others think? Should the argument to CHR() be a Unicode
code point, or the character's numeric representation in the database
encoding?