On 03/17/2017 07:19 AM, Kyotaro HORIGUCHI wrote:
At Mon, 13 Mar 2017 21:07:39 +0200, Heikki Linnakangas <hlinn...@iki.fi> wrote in
>> Hmm. A somewhat different approach might be more suitable for testing
>> across versions, though. We could modify the perl scripts slightly to
>> print out SQL statements that exercise every mapping. For every
>> supported conversion, the SQL script could:
>>
>> 1. create a database in the source encoding.
>> 2. set client_encoding='<target encoding>'
>> 3. SELECT a string that contains every character in the source
>>    encoding.
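A minimal sketch of that idea, assuming the modified perl scripts would print something like the following per conversion (the function name, database-name prefix, and locale options are my own assumptions, not anything from the thread):

```python
# Hypothetical sketch: emit the three steps above as a SQL script for
# one conversion. sample_text stands in for the full character repertoire.
def conversion_test_sql(src_encoding, dst_encoding, sample_text):
    dbname = f"regress_conv_{src_encoding.lower()}"
    return "\n".join([
        # 1. create a database in the source encoding (TEMPLATE template0
        #    so the encoding is not constrained by template1's locale)
        f"CREATE DATABASE {dbname} TEMPLATE template0 "
        f"ENCODING '{src_encoding}' LC_COLLATE 'C' LC_CTYPE 'C';",
        f"\\connect {dbname}",
        # 2. set the client encoding to the conversion's target
        f"SET client_encoding = '{dst_encoding}';",
        # 3. select a string containing the source encoding's characters
        f"SELECT '{sample_text}';",
    ])

print(conversion_test_sql('EUC_JP', 'UTF8', 'abc'))
```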
> There are many encodings that can be used as a client encoding but
> cannot be used as a database (server) encoding.
>
> I would like to use the convert() function. It can be a large
> PL/pgSQL function or a series of "SELECT convert(...)"s. The
> latter is doable on-the-fly (by not generating/storing the whole
> script), e.g.:
> | -- Test for SJIS->UTF-8 conversion
> | SELECT convert('\x0000'::bytea, 'SJIS', 'UTF8'); -- results in error
> | SELECT convert('\x897e'::bytea, 'SJIS', 'UTF8');
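A minimal sketch of generating such per-character convert() statements on the fly (the function name and interface are my own invention):

```python
# Hypothetical sketch: emit one "SELECT convert(...)" statement per mapped
# byte sequence, rather than building and storing one huge script up front.
def convert_statements(byte_seqs, src_encoding, dst_encoding):
    """Yield a convert() test statement for each source byte sequence."""
    for raw in byte_seqs:
        yield (f"SELECT convert('\\x{raw.hex()}'::bytea, "
               f"'{src_encoding}', '{dst_encoding}');")

# Example: the SJIS byte sequence 0x897e from the mail above.
for stmt in convert_statements([b'\x89\x7e'], 'SJIS', 'UTF8'):
    print(stmt)  # prints: SELECT convert('\x897e'::bytea, 'SJIS', 'UTF8');
```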
>> You could then run those SQL statements against old and new server
>> versions, and verify that you get the same results.
> Including the result files in the repository would make this easy,
> but would bloat it unacceptably. Should we put the instructions in
> mb/Unicode/README.sanity_check?
Yeah, a README with instructions on how to do that sounds good. There is
no need to include the results in the repository; you can run the script
against an older version whenever you need something to compare with.
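For reference, the comparison workflow might look like this (the ports, file names, and script name are assumptions for illustration, not anything agreed in the thread):

```shell
# Run the generated statements against the old and the new server
# (ports are assumptions) and compare the conversion results.
psql -p 5432 -X -a -f conversion_tests.sql > results_old.txt 2>&1
psql -p 5433 -X -a -f conversion_tests.sql > results_new.txt 2>&1
diff -u results_old.txt results_new.txt && echo "conversion maps match"
```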