On Sat, Feb 13, 2016 at 10:10 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Feb 12, 2016 at 12:55 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> > Very good catch.  I think if we want to address this we can detect
> > the non-group-leader transactions that try to update a different
> > CLOG page (different from the group leader's) after acquiring
> > CLogControlLock, and then mark these transactions such that
> > after waking they need to perform the CLOG update via the normal path.
> > Now this can decrease the latency of such transactions, but I
> I think you mean "increase".


> > think there will be only very few transactions, if any, that
> > can face this condition, because most of the concurrent transactions
> > should be on the same page; otherwise the idea of multiple slots we
> > have tried upthread would have shown benefits.
> > Another idea could be that we update the comments indicating the
> > possibility of multiple CLOG-page updates in the same group, on the basis
> > that such cases will be rare and, even if they happen, they won't affect the
> > transaction status update.
> I think either of those approaches could work, as long as the
> logic is correct and the comments are clear.  The important thing is
> that the code had better do something safe if this situation ever
> occurs, and the comments had better be clear that this is a possible
> situation so that someone modifying the code in the future doesn't
> think it's impossible, rely on it not happening, and consequently
> introduce a very-low-probability bug.

Okay, I have updated the comments to note such a possibility and the
possible improvement, should we ever face such a situation.  I have also
once again verified that even if a group contains transaction status
updates for multiple pages, it works fine.

Performance data with attached patch is as below.

M/c configuration
RAM - 500GB
8 sockets, 64 cores (128 threads total with hyperthreading)

Non-default parameters
max_connections = 300
checkpoint_timeout = 35min
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 256MB

Client_Count/Patch_Ver      1     64    128    256
HEAD (481725c0)           963  28145  28593  26447
Patch-1                   938  28152  31703  29402

We can see a 10~11% performance improvement, as observed
previously.  You might read the 0.02% performance difference with
the patch as a regression, but that is just run-to-run variation.

Note - To take this performance data, I had to revert commit
ac1d7945, which is a known issue in HEAD, as reported here [1].

[1] -

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment: group_update_clog_v5.patch
Description: Binary data

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)