On 19/09/10 16:48, Kevin Grittner wrote:
> After tossing it around in my head for a bit, the only thing that I
> see (so far) which might work is to maintain a *list* of
> SERIALIZABLEXACT objects in memory rather than using a hash table.
> The recheck after releasing the shared lock and acquiring an
> exclusive lock would then go through SerializableXidHash.  I think
> that can work, although I'm not 100% sure that it's an improvement.

Yeah, and keep in mind that scanning a linked list with only a few items is faster than sequentially scanning an almost-empty hash table.
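
Just to make sure we're talking about the same thing, here's a minimal sketch of the list idea in plain C. The struct layout, the list head, and FindSerializableXact() are placeholders I made up for illustration, not anything from the actual patch:

#include <stddef.h>

typedef unsigned int TransactionId;          /* stand-in for the real typedef */

/* Hypothetical, much-simplified SERIALIZABLEXACT */
typedef struct SerializableXact
{
    TransactionId            xid;
    struct SerializableXact *next;           /* linked into one global list */
} SerializableXact;

/* Head of the hypothetical global list, protected by a shared lock */
static SerializableXact *serializableXacts = NULL;

/*
 * Walk the list looking for the entry with the given xid.  With only a
 * handful of active serializable transactions this touches a few nodes,
 * whereas iterating an almost-empty hash table still visits every bucket.
 */
static SerializableXact *
FindSerializableXact(TransactionId xid)
{
    SerializableXact *sxact;

    for (sxact = serializableXacts; sxact != NULL; sxact = sxact->next)
    {
        if (sxact->xid == xid)
            return sxact;
    }
    return NULL;
}

The recheck under the exclusive lock would then still go through SerializableXidHash as you describe; the sketch only covers the list side.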

Putting that aside for now, we have one very serious problem with this algorithm:

> While they [SIREAD locks] are associated with a transaction, they must survive
> a successful COMMIT of that transaction, and remain until all overlapping
> transactions complete.

Long-running transactions are already nasty because they prevent VACUUM from cleaning up old tuple versions, but this escalates the problem to a whole new level. If you have one old transaction sitting idle, every transaction that follows consumes a little bit of shared memory, until that old transaction commits. Eventually you will run out of shared memory, and will not be able to start new transactions anymore.
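
To make the lifetime problem concrete, the release rule boils down to roughly the check below. CommitSeqNo and the function name are my own invention for illustration only, not anything in the patch:

#include <stdbool.h>

typedef unsigned long long CommitSeqNo;   /* hypothetical commit ordering */

typedef struct CommittedSxact
{
    CommitSeqNo commitSeqNo;              /* assigned at COMMIT */
    bool        committed;
} CommittedSxact;

/*
 * A committed transaction's SIREAD locks (and its SERIALIZABLEXACT) can
 * only be released once every transaction that overlapped it has finished;
 * an active transaction overlaps it if it started before the commit.  So
 * nothing that committed after the oldest still-active serializable
 * transaction took its snapshot can be freed, and one idle old transaction
 * pins the entry of every transaction that commits after it.
 */
static bool
CanReleaseSerializableXact(const CommittedSxact *sxact,
                           CommitSeqNo oldestActiveStartSeqNo)
{
    return sxact->committed &&
           sxact->commitSeqNo <= oldestActiveStartSeqNo;
}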

Is there anything we can do about that? Just a thought, but could you somehow coalesce the information about multiple already-committed transactions to keep down the shared memory usage? For example, if you have this:

1. Transaction <slow> begins
2. 100 other transactions begin and commit

Could you somehow group together the 100 committed transactions and represent them with just one SERIALIZABLEXACT struct?
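
I'm thinking of something along these lines. The single-bitmask lock representation is a gross simplification just to show the coalescing idea, not a proposal for how the predicate locks would actually be stored:

#include <stdint.h>

typedef unsigned long long CommitSeqNo;   /* as in the previous sketch */

/* Hypothetical summary entry standing in for many committed transactions */
typedef struct CommittedXactSummary
{
    uint64_t    readLockMask;      /* union of (grossly simplified) SIREAD locks */
    CommitSeqNo earliestCommit;    /* earliest commit among the members */
    CommitSeqNo latestCommit;      /* latest commit among the members */
    int         nMembers;
} CommittedXactSummary;

/*
 * Fold one newly committed transaction into the summary.  The summary is
 * necessarily coarser than the individual entries: a conflict against any
 * member now looks like a conflict against all of them, so we would get
 * more false-positive serialization failures, but the shared memory used
 * stays constant no matter how many transactions commit while the old
 * one sits idle.
 */
static void
MergeIntoSummary(CommittedXactSummary *summary,
                 uint64_t xactReadLockMask,
                 CommitSeqNo xactCommitSeqNo)
{
    summary->readLockMask |= xactReadLockMask;
    if (summary->nMembers == 0 || xactCommitSeqNo < summary->earliestCommit)
        summary->earliestCommit = xactCommitSeqNo;
    if (summary->nMembers == 0 || xactCommitSeqNo > summary->latestCommit)
        summary->latestCommit = xactCommitSeqNo;
    summary->nMembers++;
}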

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

