On Fri, May 4, 2012 at 9:25 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> In 9.1, we just did this:
>
>>                 if (locallock->proclock == NULL || locallock->lock == NULL)
>>                 {
>>                         /*
>>                          * We must've run out of shared memory while trying
>>                          * to set up this lock.  Just forget the local entry.
>>                          */
>>                         Assert(locallock->nLocks == 0);
>>                         RemoveLocalLock(locallock);
>>                         continue;
>>                 }
>
>> ...and I just shoved the new logic into that stanza without thinking
>> hard enough about what order to do things in.
>
> Right.  The other thing that was bothering me about that was that it's
> not clear now how to tell a broken locallock entry (which is what this
> logic originally intended to clean up) from a fastpath one.  Perhaps
> it'd be a good idea to add a "valid" flag?

Well, I think nLocks == 0 should serve that purpose adequately.
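In other words, the cleanup stanza just needs to test that before
looking at the pointers.  Something like this (an untested sketch of
the reordering, not necessarily what I'll end up committing):

        if (locallock->nLocks == 0)
        {
                /*
                 * We must've run out of shared memory while trying to
                 * set up this lock.  Just forget the local entry.  A
                 * fast-path entry always has nLocks > 0, so it can't
                 * be mistaken for this case.
                 */
                RemoveLocalLock(locallock);
                continue;
        }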

> And while I'm wondering
> about such things, what happens when it's necessary to convert a
> fastpath lock to a regular one, but there's no room in shared memory
> for more lock objects?

Then you error out.  Of course, if the fast-path mechanism didn't
exist at all, you would have started erroring out much sooner.  Now,
there is a rub here, because the mechanism isn't "fair": strong
lockers will error out instead of weak lockers, and in the worst case
where the lock table remains perpetually on the edge of overflowing,
strong lock requests could fail repeatedly, essentially a DOS.
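Concretely, the transfer path can do something like this (a
simplified sketch of the error case; the real code also has to cope
with releasing the lock-table partition lock and so forth):

        /*
         * Promoting a fast-path lock to a regular one: if we can't
         * get a main-table entry, it's the strong locker doing the
         * promotion that reports the failure.
         */
        proclock = SetupLockInTable(lockMethodTable, proc, locktag,
                                    hashcode, lockmode);
        if (proclock == NULL)
                ereport(ERROR,
                        (errcode(ERRCODE_OUT_OF_MEMORY),
                         errmsg("out of shared memory"),
                         errhint("You might need to increase max_locks_per_transaction.")));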
Originally, I thought that the patch should include some kind of
accounting mechanism to prevent that starvation from happening, where
we'd keep track of the number of fast-path locks that were
outstanding and make sure to keep that many slots free in the main
lock table, but Noah talked me out of it, on the theory that (1) it
was very unlikely to occur in practice, (2) if it did occur, you
probably need to bump up max_locks_per_transaction anyway, and (3) it
amounted to forcing failures in cases where that might not be
strictly necessary, which is usually not a great thing to do.
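For the archives, what I had in mind would have amounted to something
like this (pure illustration; FastPathLockCount and the two helper
functions are invented names, not anything that was ever in the
patch):

        /*
         * Invented sketch of the rejected accounting scheme: before
         * granting a lock via the fast path, make sure a main-table
         * slot could still be found for it if it later has to be
         * transferred.
         */
        if (FastPathLockCount + MainLockTableEntriesInUse() >=
                MainLockTableMaxEntries())
                return false;           /* take the slow path instead */
        FastPathLockCount++;            /* reserve a slot for a possible transfer */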

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
