On 01/31/2015 12:25 AM, Jim Nasby wrote:
[snip]
> It's a bit more complex than that. First, TOAST isn't limited to bytea; it applies to ALL varlena fields in a table that are allowed to be stored externally. Second, the limit is actually per-table: every table gets its own toast table, and each toast table is limited to 4B unique OIDs. Third, the OID counter is global, but the code should handle conflicts by trying to get another OID. See toast_save_datum(), which calls GetNewOidWithIndex().

> Now, the reality is that GetNewOidWithIndex() is going to keep incrementing the global OID counter until it finds an OID that isn't in the toast table. That means that if you actually get anywhere close to using 4B OIDs, you're going to become extremely unhappy with the performance of toasting new data.
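To see why, here is a toy model of that probing loop (plain Python, purely illustrative, not the actual backend code): a global counter is advanced until it lands on an OID the toast table doesn't already contain, so the cost per allocation grows sharply as the table fills.

```python
import random

# Toy model of the GetNewOidWithIndex() probing loop (illustration
# only, not the actual PostgreSQL code).  The real OID space is 2^32;
# it is shrunk to 10,000 here so the cost of a nearly-full toast
# table shows up immediately.
OID_SPACE = 10_000

def allocate_oid(used, counter):
    """Bump the counter until a free OID turns up.
    Returns (oid, probe_count)."""
    probes = 0
    while True:
        counter = (counter + 1) % OID_SPACE
        probes += 1
        if counter not in used:
            used.add(counter)
            return counter, probes

def avg_probes(fill_fraction, allocations=100, seed=1):
    """Average probes per allocation at a given table fill level."""
    rng = random.Random(seed)
    used = set(rng.sample(range(OID_SPACE), int(OID_SPACE * fill_fraction)))
    counter, total = rng.randrange(OID_SPACE), 0
    for _ in range(allocations):
        counter, probes = allocate_oid(used, counter)
        total += probes
    return total / allocations

print(avg_probes(0.10))  # sparse table: about one probe per OID
print(avg_probes(0.99))  # nearly full: orders of magnitude more probes
```

The real situation is worse than the model suggests, since each probe in PostgreSQL is an index lookup on the toast table, not a set-membership check in memory.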

Indeed ......

> I don't think it would be horrifically hard to change the way toast OIDs are assigned (I'm thinking we'd basically switch to creating a sequence for every toast table), but I don't think anyone's ever tried to push toast hard enough to hit this kind of limit.

We did: the Billion Table Project, part 2 (a.k.a. "when does Postgres' OID allocator become a bottleneck"). The allocator becomes essentially unusable at about 2.1B OIDs, whereas it performs very well at "quite empty" (< 100M objects) levels.

So yes, using one sequence per TOAST table should help.
Combined with the new SequenceAMs / sequence implementation being proposed (specifically: one file for all sequences in a given tablespace), this should scale much better.
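A per-table sequence removes the probing entirely: each toast table hands out its own monotonically increasing IDs with no global contention and no conflict checks. A minimal sketch of the idea (ToastTable and save_datum are illustrative names, not PostgreSQL APIs):

```python
import itertools

# Sketch of "one sequence per toast table": each table owns a private
# counter, so assigning a chunk ID is O(1) with no conflict probing.
# ToastTable / save_datum are illustrative names, not PostgreSQL APIs.
class ToastTable:
    def __init__(self):
        self._seq = itertools.count(1)  # per-table sequence
        self.chunks = {}

    def save_datum(self, value):
        chunk_id = next(self._seq)      # no global counter, no retry loop
        self.chunks[chunk_id] = value
        return chunk_id

a, b = ToastTable(), ToastTable()
print(a.save_datum("x"), a.save_datum("y"), b.save_datum("z"))  # → 1 2 1
```

Since the IDs only need to be unique within one toast table (they are already qualified by the chunk index), nothing is lost by dropping the global counter.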


My 2c.


Regards,

    / J.L.




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
