After running the core regression tests with installcheck-parallel, the pg_locks view sometimes shows me apparently-orphaned SIReadLock entries. They accumulate over repeated test runs. Right now, for example, I see
regression=# select * from pg_locks;
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid  |      mode       | granted | fastpath
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+-----------------+---------+----------
 relation   |   130144 |    12137 |      |       |            |               |         |       |          | 3/7977             | 8924 | AccessShareLock | t       | t
 virtualxid |          |          |      |       | 3/7977     |               |         |       |          | 3/7977             | 8924 | ExclusiveLock   | t       | t
 relation   |   130144 |   136814 |      |       |            |               |         |       |          | 22/536             | 8076 | SIReadLock      | t       | f
 relation   |   111195 |   118048 |      |       |            |               |         |       |          | 19/665             | 6738 | SIReadLock      | t       | f
 relation   |   130144 |   134850 |      |       |            |               |         |       |          | 12/3093            | 7984 | SIReadLock      | t       | f
(5 rows)

after having done a couple of installcheck iterations since starting the postmaster.

The PIDs shown as holding those locks don't exist anymore, but digging in the postmaster log shows that they were session backends during the regression test runs.  Furthermore, it seems like they usually were the ones running either the triggers or portals tests.

I don't see this behavior in v11 (though maybe I just didn't run it long enough).  In HEAD, a run adds one or two new entries more often than not.

This is a pretty bad bug IMO --- quite aside from any ill effects of the entries themselves, the leak seems fast enough that it'd run a production installation out of locktable space before very long.

I'd have to say that my first suspicion falls on bb16aba50 ...

			regards, tom lane
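
PS: for anyone who wants to check their own installation, here's an (untested) sketch of a query that lists SIReadLock entries whose holding backend is gone, just joining pg_locks against pg_stat_activity:

    -- SIReadLocks attributed to PIDs with no live backend
    select l.locktype, l.database, l.relation, l.virtualtransaction, l.pid, l.mode
      from pg_locks l
     where l.mode = 'SIReadLock'
       and l.pid is not null
       and not exists (select 1 from pg_stat_activity a where a.pid = l.pid);

Bear in mind that SIReadLocks can legitimately outlive the owning transaction (they have to be retained until overlapping serializable transactions finish), so a hit from this query isn't proof of a leak by itself --- it's the steady accumulation across runs that looks broken.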