On Wed, Jul 24, 2019 at 4:24 PM Andres Freund <and...@anarazel.de> wrote:
> as you can see, the same xmax is set for both row versions, with the
> new infomask being HEAP_XMAX_KEYSHR_LOCK | HEAP_XMAX_LOCK_ONLY |
> HEAP_UPDATED.

Meta remark about your test case: I am a big fan of microbenchmarks
like this, which execute simple DML queries over a single connection
and then check whether the on-disk state looks as good as expected,
for some value of "good". I had a lot of success with this approach
while developing the v12 nbtree work, where I went to the trouble of
automating everything. The same test suite also helped with the
nbtree compression/deduplication patch just today.

I like to call these tests "wind tunnel tests". It's far from obvious
that you can take a totally synthetic, serial test and use it to
measure something that matters to real workloads. It seems to work
well when there is a narrow, specific thing that you're interested
in, especially when there is a real risk of regressing performance in
some way.
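
To make that concrete, here is a sketch of the kind of test I have in
mind, applied to the scenario I understand you to be describing (your
actual test case isn't quoted here, so the table and column names are
invented; pageinspect does the on-disk inspection):

    CREATE EXTENSION IF NOT EXISTS pageinspect;

    CREATE TABLE foo (id int PRIMARY KEY, val int);
    INSERT INTO foo VALUES (1, 0);

    BEGIN;
    -- Take a key-share lock, then update the same row.  The current
    -- backend is the only locker, yet the lock is carried forward to
    -- the successor row version.
    SELECT * FROM foo WHERE id = 1 FOR KEY SHARE;
    UPDATE foo SET val = val + 1 WHERE id = 1;
    COMMIT;

    -- Dump both row versions from page 0 of the heap:
    SELECT lp, t_xmin, t_xmax, to_hex(t_infomask) AS infomask
    FROM heap_page_items(get_raw_page('foo', 0));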

> but we really don't need to do any of that in this case - the only
> locker is the current backend, after all.
>
> I think this isn't great, because it'll later cause unnecessary
> hint bit writes (although ones somewhat likely to be combined with
> setting XMIN_COMMITTED), and even extra work for freezing.
>
> Based on a quick look, this wasn't the case before the finer-grained
> tuple locking - which makes sense; there were no cases where locks
> would need to be carried forward.

I agree that this is unfortunate. Are you planning on working on it?
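
For anyone decoding t_infomask by hand: the three flags you mention
have these values in htup_details.h (so the combination shows up as
0x2090), and they can be pulled apart directly in the pageinspect
query -- a sketch, again using the invented table name from above:

    -- HEAP_XMAX_KEYSHR_LOCK = 0x0010
    -- HEAP_XMAX_LOCK_ONLY   = 0x0080
    -- HEAP_UPDATED          = 0x2000
    SELECT lp, t_xmax,
           (t_infomask & x'0010'::int) <> 0 AS xmax_keyshr_lock,
           (t_infomask & x'0080'::int) <> 0 AS xmax_lock_only,
           (t_infomask & x'2000'::int) <> 0 AS updated
    FROM heap_page_items(get_raw_page('foo', 0));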

-- 
Peter Geoghegan

