Re: [GENERAL] Maximum number of exclusive locks

2016-09-13 Thread Tom Lane
Jeff Janes  writes:
> On Tue, Sep 13, 2016 at 6:21 AM, Tom Lane  wrote:
>> Having said that, the amount of slop involved is only enough for a
>> few hundred lock entries.  Not sure how you're managing to get to
>> nearly 20000 extra entries.

> The code assumes every locked object will have 2 processes that hold it (or
> wait for it).  If you actually only have one holder for each locked object,
> that frees up a lot of memory to hold more locked objects.

Ah, right, the key is the difference between the init and max table sizes
in InitLocks.  So his test case is chewing up all the spare space but more
of it is going into the LOCK table than the PROCLOCK table, compared to
the estimates.  Possibly would leave him a bit starved for PROCLOCKs later.
I'm disinclined to change that initialization code though, since the point
is to try to adapt to variations in the holders-per-lock ratio for
different workloads.
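
For concreteness, plugging the reported settings into that picture gives
roughly this (back-of-the-envelope only; it ignores the differing entry
sizes and hash-table overhead):

  -- rough numbers from the original report, not exact shared-memory accounting
  select 512 * (30 + 5)          as nominal_lock_entries,      -- 17920
         2 * 512 * (30 + 5)      as assumed_proclock_entries,  -- 35840 (2 holders/lock)
         37133                   as observed_failure_point,
         37133 - 512 * (30 + 5)  as extra_lock_entries;        -- 19213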

regards, tom lane




Re: [GENERAL] Maximum number of exclusive locks

2016-09-13 Thread Jeff Janes
On Tue, Sep 13, 2016 at 6:21 AM, Tom Lane  wrote:

> "Daniel Verite"  writes:
> > Nothing to complain about, but why would the above formula
> > underestimate the number of object locks actually available
> > to a transaction? Isn't it supposed to be a hard cap for such
> > locks?
>
> No, it's a minimum not a maximum.  There's (intentionally) a fair amount
> of slop in the initial shmem size request.  Once everything that's going
> to be allocated has been allocated during postmaster startup, the rest is
> available for growth of shared hash tables, which in practice means the
> lock table; there aren't any other shared structures that grow at runtime.
> So there's room for the lock table to grow a bit beyond its nominal
> capacity.
>
> Having said that, the amount of slop involved is only enough for a
> few hundred lock entries.  Not sure how you're managing to get to
> nearly 20000 extra entries.
>
>
The code assumes every locked object will have 2 processes that hold it (or
wait for it).  If you actually only have one holder for each locked object,
that frees up a lot of memory to hold more locked objects.
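
One way to watch this from the outside (a sketch; it assumes large-object
locks are reported with locktype 'object' in pg_locks):

  -- run from a second session while the bulk lo_unlink() transaction is open:
  -- each locked object should show up with a single holding backend
  select pid, mode, granted, count(*)
  from pg_locks
  where locktype = 'object'
  group by pid, mode, granted;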

Cheers,


Jeff


Re: [GENERAL] Maximum number of exclusive locks

2016-09-13 Thread Tom Lane
"Daniel Verite"  writes:
> Nothing to complain about, but why would the above formula
> underestimate the number of object locks actually available
> to a transaction? Isn't it supposed to be a hard cap for such
> locks?

No, it's a minimum not a maximum.  There's (intentionally) a fair amount
of slop in the initial shmem size request.  Once everything that's going
to be allocated has been allocated during postmaster startup, the rest is
available for growth of shared hash tables, which in practice means the
lock table; there aren't any other shared structures that grow at runtime.
So there's room for the lock table to grow a bit beyond its nominal
capacity.

Having said that, the amount of slop involved is only enough for a
few hundred lock entries.  Not sure how you're managing to get to
nearly 20000 extra entries.
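
(As an aside, and only on servers much newer than the 9.5 instance
discussed here: PostgreSQL 13 and later expose pg_shmem_allocations, where
that leftover slop shows up as the row with a NULL name.  The name pattern
below is an assumption about how the lock hash tables are labelled.)

  -- PostgreSQL 13+ only: lock hash tables plus the free "slop"
  select name, allocated_size
  from pg_shmem_allocations
  where name is null or name ilike '%lock hash%'
  order by allocated_size desc;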

regards, tom lane




[GENERAL] Maximum number of exclusive locks

2016-09-13 Thread Daniel Verite
  Hi,

When deleting large objects, an exclusive lock is grabbed on each
object individually. As a result, a transaction that deletes them en
masse can encounter this error:

 ERROR:  out of shared memory
 HINT:  You might need to increase max_locks_per_transaction.

I would expect the maximum number of lo_unlink() calls in the same
transaction to be capped at:
 max_locks_per_transaction * (max_connections + max_prepared_transactions)
per documentation:
https://www.postgresql.org/docs/current/static/runtime-config-locks.html

  "The shared lock table tracks locks on max_locks_per_transaction *
  (max_connections + max_prepared_transactions) objects (e.g., tables);
  hence, no more than this many distinct objects can be locked at any
  one time"

But in practice, on an otherwise unused 9.5 instance, I've noticed
that this query:

  select count(lo_unlink(oid))
  from (select oid from pg_largeobject_metadata limit :LIMIT) s;

with these settings:

  max_locks_per_transaction | 512
  max_connections           | 30
  max_prepared_transactions | 5

starts failing at LIMIT=37133, although I'd expect this to
happen, in the best case, at LIMIT=512*(30+5)=17920.
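
(For anyone reproducing this, the large objects themselves can be created
beforehand with something along these lines; lo_create(0) lets the server
assign the OID.)

  -- one-time setup: create ~40000 empty large objects for the test above
  select count(lo_create(0)) from generate_series(1, 40000);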

Nothing to complain about, but why would the above formula
underestimate the number of object locks actually available
to a transaction? Isn't it supposed to be a hard cap for such
locks?


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite

