The index row size limit reared its ugly head again.

My current use of PostgreSQL is to load structured data from sources I
don't control, in order to support a wide range of queries whose
precise nature is not yet known to me.  (Is this called a data
warehouse?)

Anyway, what happens from time to time is that some data which has
been processed successfully in the past suddenly fails to load
because there happens to be a very long string in it.  I know how to
work around this, but it's still annoying when it happens, and the
workarounds can make it much, much harder to write efficient queries.
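(The post doesn't name the workaround, but a common one is to index a fixed-size hash of the value instead of the value itself; table and column names here are hypothetical, and the error text is approximate:)

```sql
-- A b-tree index entry must fit in roughly a third of a page
-- (about 2704 bytes on a default 8 kB page), so a very long
-- string breaks the load:
CREATE TABLE raw_data (payload text);
CREATE INDEX ON raw_data (payload);
INSERT INTO raw_data VALUES (repeat('x', 10000));
-- ERROR:  index row size ... exceeds btree version 4 maximum 2704 ...

-- Workaround: index a hash of the value, which is always small.
DROP INDEX raw_data_payload_idx;
CREATE INDEX ON raw_data (md5(payload));

-- But now equality lookups must be phrased against the hash
-- for the index to be usable, which is what makes query-writing harder:
SELECT * FROM raw_data
WHERE md5(payload) = md5('needle') AND payload = 'needle';
```

A hash index (`CREATE INDEX ... USING hash (payload)`) avoids the rewrite of the query, since it stores only the hash code, but it supports equality lookups only.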

What would it take to eliminate the B-tree index row size limit (or
rather, increase it to several hundred megabytes)?  I don't care about
lookup performance for overlong columns; I just want to be able to
load arbitrary data and index it.

Sent via pgsql-hackers mailing list