[ breaking this off to an actual new thread ]

Chapman Flack <c...@anastigmatix.net> writes:
> Is there any way to find out, from the catalogs or in any automatable way,
> which types are implemented with a dependence on the database encoding
> (or on some encoding)?
Nope.  Base types are quite opaque; we don't have a lot of concepts of
type properties.  We do know which types respond to collations, which is
at least adjacent to your question, but it's not the same.

> In the alternate world, you would know that certain datatypes were
> inherently encoding-oblivious (numbers, polygons, times, ...), certain
> others are bound to the server encoding (text, varchar, name, ...), and
> still others are bound to a known encoding other than the server encoding:
> the ISO SQL NCHAR type (bound to an alternate configurable database
> encoding), "char" (always SQL_ASCII), xml/json/jsonb (always with the full
> Unicode repertoire, however they choose to represent it internally).

Another related problem here is that a "name" in a shared catalog could
be in some other encoding besides the current database's encoding.  We
have no way to track that, nor to perform the implied conversions.  I
don't have a solution for that one either, but we should include it in
the discussion if we're trying to improve the situation.

			regards, tom lane
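
[Editor's note: the collation-awareness mentioned above is the one type
property the catalogs do expose, via pg_type.typcollation, which is
nonzero for collatable types.  A minimal query, shown only as a sketch
of that adjacent-but-not-equivalent information:]

```sql
-- List types that respond to collations (typcollation <> 0).
-- This is adjacent to encoding dependence, but not the same thing:
-- e.g. it flags text and varchar, but says nothing about xml or "char".
SELECT typname, typcollation
FROM pg_catalog.pg_type
WHERE typcollation <> 0
ORDER BY typname;
```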