On 2017-05-11 17:21:18 -0400, Tom Lane wrote:
> Peter Eisentraut <peter.eisentr...@2ndquadrant.com> writes:
> > I ran this script
> > CREATE SEQUENCE seq1;
> > DO LANGUAGE plpythonu $$
> > plan = plpy.prepare("SELECT nextval('seq1')")
> > for i in range(0, 10000000):
> >     plpy.execute(plan)
> > $$;
> > and timed the "DO".
> It occurred to me that plpy.execute is going to run a subtransaction for
> each call, which makes this kind of a dubious test case, both because of
> the extra subtransaction overhead and because subtransaction lock
> ownership effects will possibly skew the results. So I tried to
> reproduce the test using plain plpgsql,
> CREATE SEQUENCE IF NOT EXISTS seq1;
> \timing on
> do $$
> declare x int;
> begin
>   for i in 0 .. 10000000 loop
>     x := nextval('seq1');
>   end loop;
> end
> $$;
> On HEAD, building without cassert, I got a fairly reproducible result
> of about 10.6 seconds. I doubt my machine is 6X faster than yours,
> so this indicates that the subtransaction overhead is pretty real.
Isn't that pretty much the point? The whole open_share_lock()
optimization looks like it really only can make a difference with
subtransactions.
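To make the per-call difference between the two benchmarks concrete, here is a toy Python model (all names here are invented for the sketch, not PostgreSQL's actual API): plpy.execute wraps each statement in its own subtransaction, while the plpgsql expression does not, so only the plpython loop pays the subtransaction setup/teardown on every iteration:

```python
# Toy model of the per-call overhead difference (illustrative only;
# these names are made up, not PostgreSQL code).

class Stats:
    def __init__(self):
        self.subxact_starts = 0   # subtransactions begun
        self.executes = 0         # statements executed

def plpy_style_execute(stats):
    # models plpy.execute: each call begins and ends a subtransaction
    stats.subxact_starts += 1
    stats.executes += 1

def plpgsql_style_call(stats):
    # models x := nextval('seq1') in plpgsql: no subtransaction wrapper
    stats.executes += 1

python_stats, plpgsql_stats = Stats(), Stats()
for _ in range(10_000):
    plpy_style_execute(python_stats)
    plpgsql_style_call(plpgsql_stats)

# same number of statements executed, but only the plpy-style loop
# pays the subtransaction cost on every iteration
```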
> > I compared the stock releases with a patched version that replaces the
> > body of open_share_lock() with just
> > return relation_open(seq->relid, AccessShareLock);
> Hm. I don't think that's a sufficient code change, because if you do it
> like that then the lock remains held after nextval() returns.
Hm? That's not new, is it? We previously did a LockRelationOid(seq->relid) and
then relation_open(seq->relid, NoLock)?
> This means
> that (a) subsequent calls aren't hitting the shared lock manager at all,
> they're just incrementing a local lock counter, and whether it would be
> fastpath or not is irrelevant; and (b) this means that the semantic issues
> Andres is worried about remain in place, because we will hold the lock
> till transaction end.
My concern was aborted subtransactions, and those should release their
lock via resowner stuff at abort?
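A minimal sketch of that resource-owner behavior, as a Python model with invented names (the real mechanism lives in PostgreSQL's resowner code): locks taken inside a subtransaction are remembered by its resource owner and released when that subtransaction aborts:

```python
# Toy model (names are illustrative assumptions, not PostgreSQL code)
# of a subtransaction's resource owner releasing its locks at abort.

held_locks = set()  # stands in for the backend's set of held locks

class ResourceOwner:
    def __init__(self, parent=None):
        self.parent = parent
        self.locks = []

    def remember_lock(self, lock):
        self.locks.append(lock)

    def abort(self):
        # on subtransaction abort, release every lock this owner tracked
        for lock in self.locks:
            held_locks.discard(lock)
        self.locks.clear()

top = ResourceOwner()
sub = ResourceOwner(parent=top)   # resource owner for a subtransaction

held_locks.add("seq1: AccessShareLock")
sub.remember_lock("seq1: AccessShareLock")

sub.abort()  # the aborted subtransaction drops the lock it acquired
```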
> I experimented with something similar, just replacing open_share_lock
> as above, and I got runtimes of just about 12 seconds, which surprised
> me a bit.
Yea, I suspect we'd need something like the current open_share_lock,
except with the resowner trickery removed.
> I'd have thought the locallock-already-exists code path would be
> faster than that.
It's a hash-table lookup, via dynahash no less; that's definitely going
to be more expensive than a single seq->lxid != thislxid comparison...
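For illustration, a toy Python model of that fast path (helper names like lock_acquire are invented for the sketch): caching the transaction's lxid in the sequence state means every call after the first in a given transaction does one comparison instead of a lock-table hash lookup:

```python
# Toy model (illustrative names, not PostgreSQL code) of the lxid-cache
# fast path: the lock-manager hash is only consulted once per transaction.

lock_manager_hits = 0  # dynahash-style lookups performed in the toy lock table

def lock_acquire(relid):
    # stands in for LockRelationOid: a hash lookup in the local lock table
    global lock_manager_hits
    lock_manager_hits += 1

class SeqState:
    def __init__(self, relid):
        self.relid = relid
        self.lxid = None  # lxid of the last transaction that locked this sequence

def open_share_lock(seq, thislxid):
    # the cheap path: a single comparison instead of a hash-table lookup
    if seq.lxid != thislxid:
        lock_acquire(seq.relid)
        seq.lxid = thislxid

seq = SeqState(relid=16384)
for _ in range(1_000_000):        # many nextval() calls in one transaction
    open_share_lock(seq, thislxid=7)

open_share_lock(seq, thislxid=8)  # a new transaction pays the lookup once more
```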