On Thu, Sep 22, 2011 at 10:36 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Anyway, I won't stand in the way of the patch as long as it's modified
> to limit the number of values considered for any one character position
> to something reasonably small.
One thing I was thinking about is that it would be useful to have some
metric for judging how well any given algorithm we might pick here
actually works. For example, if we were to try all possible
three-character strings in some encoding and run make_greater_string()
on each of them, we could then measure the failure percentage. Or, if
that's too many cases to crank through, we could limit it in some way;
but the point is that without some kind of test harness here, we have
no way of measuring the trade-off between spending more CPU time and
improving accuracy. Maybe you have a better feeling for what's
reasonable there than I do, but I'm not prepared to take a stab in the
dark without the benefit of some real measurements.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers