On 07/19/2012 03:24 PM, Tatsuo Ishii wrote:
> BTW, I'm not stuck on the mule-internal encoding. What we need here is a
> "super" encoding which could include any existing encoding without
> information loss. For this purpose, I think we could even invent a new
> encoding (maybe something like the very first proposal of ISO/IEC
> 10646?). However, using UTF-8 for this purpose seems to be just a
> disaster to me.
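
Just to make the "no information loss" requirement concrete, here's a toy sketch in Python (purely illustrative, nothing to do with mule-internal's actual byte layout): keep the original bytes together with a tag naming the encoding they came from, and only convert for display.

    # Purely illustrative sketch (not mule-internal's real representation):
    # keep each message's original bytes alongside the encoding they came
    # from, so nothing is converted and nothing is lost.
    from typing import NamedTuple

    class TaggedMessage(NamedTuple):
        encoding: str   # e.g. "EUC_JP", "LATIN2" -- the source encoding
        raw: bytes      # the bytes exactly as the backend emitted them

        def for_display(self) -> str:
            # A human-readable view may be lossy; the lossless copy stays in .raw
            return self.raw.decode(self.encoding, errors="replace")

    msg = TaggedMessage("euc_jp", "テスト".encode("euc_jp"))
    print(msg.encoding, msg.raw, msg.for_display())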

Good point about unified characters (Han unification). That was always a bad idea, and this is just one of the issues it causes.
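
A minimal demo of that loss, for anyone following along (Python, using 骨 purely as an example of a unified Han character):

    # The same unified code point is reached from several national
    # character sets, so once a log line has been converted to UTF-8 you
    # can no longer tell which encoding (and which glyph tradition) the
    # original bytes belonged to.
    ch = "\u9aa8"  # 骨 -- a Han character unified across JIS, GB and Big5

    for enc in ("euc_jp", "gb2312", "big5"):
        print(f"{enc:8} {ch.encode(enc)!r} -> U+{ord(ch):04X} -> {ch.encode('utf-8')!r}")
    # Three distinct byte sequences, one identical UTF-8 result: the
    # source encoding is unrecoverable from the converted log.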

I think these difficult encodings are where logging to a dedicated per-database file is useful.
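
PostgreSQL won't write a separate log file per database by itself, but with logging_collector = on and log_destination = 'csvlog' something like this (untested sketch, assuming the stock csvlog column order where database_name is the third field) can split the log afterwards without re-coding any bytes:

    # Rough sketch: split a csvlog file into one file per database while
    # leaving every byte untouched, since the message column may contain
    # a mix of database encodings.
    import csv
    import sys

    def split_by_database(csvlog_path: str) -> None:
        outputs = {}
        # Reading/writing as latin-1 round-trips every byte unchanged,
        # so nothing gets converted along the way.
        with open(csvlog_path, newline="", encoding="latin-1") as f:
            for row in csv.reader(f):
                if len(row) < 3:
                    continue
                db = row[2] or "nodb"
                out = outputs.get(db)
                if out is None:
                    out = outputs[db] = open(f"{db}.csv", "a",
                                             newline="", encoding="latin-1")
                csv.writer(out).writerow(row)
        for out in outputs.values():
            out.close()

    if __name__ == "__main__":
        split_by_database(sys.argv[1])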

I'm not convinced that a weird and uncommon encoding is the answer. I could see offering it as an alternative for people who find it useful, provided it's low cost in terms of complexity, maintenance, etc.

--
Craig Ringer

