On 13.09.23 00:47, Jeff Davis wrote:
> The idea is to have a new data type, say "UTEXT", that normalizes the
> input so that it can have an improved notion of equality while still
> using memcmp().
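To make the quoted idea concrete, here is a minimal sketch in Python (not PostgreSQL code; the name `to_utext` is made up for illustration): normalize once on input, and thereafter a plain byte comparison gives a Unicode-aware notion of equality.

```python
import unicodedata

def to_utext(s: str) -> bytes:
    """Hypothetical UTEXT input function: normalize to NFC once,
    so that a later byte-wise comparison (memcmp) is enough."""
    return unicodedata.normalize("NFC", s).encode("utf-8")

# "é" as one precomposed code point vs. "e" + combining acute accent
composed = "caf\u00e9"
decomposed = "cafe\u0301"

# Code-point comparison sees two different strings...
assert composed != decomposed
# ...but after input-time normalization the bytes are identical.
assert to_utext(composed) == to_utext(decomposed)
```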

I think a new type like this would obviously be suboptimal, because it's nonstandard and most people wouldn't use it.

I think a better direction here would be to work toward making nondeterministic collations usable on the global/database level and then encouraging users to use those.

It's also not clear which way the performance tradeoffs would fall.

Nondeterministic collations are obviously going to be slower, but by how much? People have accepted moving from C locale to "real" locales because they needed those semantics. Would it be any worse moving from real locales to "even realer" locales?
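To sketch where the cost lands (Python again, and only as an analogy: a real ICU nondeterministic collation involves full collation rules, not just normalization), a nondeterministic comparison has to do per-comparison work that a pre-normalized type pays once at input time:

```python
import unicodedata

def nd_equal(a: str, b: str) -> bool:
    """Roughly what a nondeterministic comparison has to do:
    normalize both operands at comparison time, every time."""
    return unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)

composed = "caf\u00e9"     # precomposed "é"
decomposed = "cafe\u0301"  # "e" + combining acute accent

# Equal under the nondeterministic comparison, despite differing bytes.
assert nd_equal(composed, decomposed)
```

The per-comparison normalization here is the recurring cost; how much it matters in practice for sorts, joins, and index lookups is exactly the open question.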

On the other hand, a utext type would either require a large set of its own functions and operators, or you would have to inject text-to-utext casts in places, which would also introduce overhead.
