Original message (sorry, don't have a copy to reply to :( ): 
http://www.postgresql.org/message-id/CAB7nPqThX=WvuGA1J-_CQ293dK3FmUivuYkNvHR0W5xjEc=o...@mail.gmail.com
Commitfest URL: https://commitfest.postgresql.org/action/patch_view?id=1579

Summary:
I'd like someone to chime in on the potential performance impact. Other than 
that, looks good modulo a few grammar fixes.

Patch applies cleanly and passes make check.

Searching for both SyncRepLock and sync_standby_priority, I find no other code 
this patch needs to modify.

SyncRepGetSynchronousNode():
IMHO, the top comment should mention that if there are multiple standbys of the 
same priority, it returns the first one.
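
Something like this standalone toy is what I mean by "first one wins" (made-up 
struct and field names, not the patch's code): with a strict '<' comparison, a 
later standby at the same priority never replaces an earlier one.

#include <stdio.h>

typedef struct
{
    int     pid;        /* 0 means the slot is unused */
    int     priority;   /* 0 means async; 1 is the highest sync priority */
} FakeWalSnd;

static int
pick_sync_standby(const FakeWalSnd *slots, int nslots)
{
    int     result = -1;
    int     best = 0;
    int     i;

    for (i = 0; i < nslots; i++)
    {
        if (slots[i].pid == 0 || slots[i].priority == 0)
            continue;           /* skip unused or async slots */

        /*
         * Strict '<' means a later slot with the same priority never
         * replaces an earlier one, so the first match wins on ties.
         */
        if (result == -1 || slots[i].priority < best)
        {
            result = i;
            best = slots[i].priority;
        }
    }
    return result;
}

int
main(void)
{
    FakeWalSnd  slots[] = {{101, 2}, {102, 1}, {103, 1}};

    /* Prints 1: slot 2 has the same priority but comes later. */
    printf("chosen slot: %d\n", pick_sync_standby(slots, 3));
    return 0;
}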

This switches from a single if() with multiple conditions &&'d together to a 
series of if (...) continue statements. I don't know whether those perform the 
same, and AFAIK this code path is fairly performance-critical.
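
FWIW, the two styles should boil down to the same chain of conditional 
branches, so I'd expect identical performance, but I haven't measured it. 
Here's a standalone toy (made-up struct and fields, not the patch's code) 
showing the two forms side by side:

#include <assert.h>
#include <stdbool.h>

typedef struct
{
    int     pid;        /* 0 = unused slot; names are made up */
    bool    is_sync;
    int     priority;
} Cand;

/* Style 1: one if() with the conditions &&'d together */
static int
count_combined(const Cand *c, int n)
{
    int     matches = 0;
    int     i;

    for (i = 0; i < n; i++)
    {
        if (c[i].pid != 0 && c[i].is_sync && c[i].priority > 0)
            matches++;
    }
    return matches;
}

/* Style 2: each condition negated, bailing out with if (...) continue */
static int
count_continue(const Cand *c, int n)
{
    int     matches = 0;
    int     i;

    for (i = 0; i < n; i++)
    {
        if (c[i].pid == 0)
            continue;
        if (!c[i].is_sync)
            continue;
        if (c[i].priority <= 0)
            continue;
        matches++;
    }
    return matches;
}

int
main(void)
{
    Cand    tests[] = {{0, true, 1}, {7, false, 1}, {7, true, 0}, {7, true, 3}};

    /* The two filters agree; compilers emit much the same branches for both. */
    assert(count_combined(tests, 4) == count_continue(tests, 4));
    return 0;
}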

<grammarZealot>In the comments, "Process" should be "Proceed".</grammarZealot>

The original search logic in SyncRepReleaseWaiters is correctly copied and 
negated.

pg_stat_get_wal_senders():
The original version loops through walsnds[] only once; the new code loops 
through it twice, both times while holding SyncRepLock (shared). Holding that 
lock will block transaction commit if synchronous replication is enabled. 
That's not great, but presumably this function isn't called terribly often, so 
maybe it's OK. (On a side note, perhaps we should add a note to the 
pg_stat_replication documentation warning not to hammer it...)
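
If the double scan ever does become a problem, one possible shape for a fix 
(sketch only, not the patch's code; I'm assuming the usual walsender headers, 
and I haven't thought through whether it's OK for the priorities to go stale 
once the lock is dropped) would be to snapshot the priorities in a short first 
pass and release SyncRepLock before doing the per-row work:

#include "postgres.h"

#include "replication/walsender.h"
#include "replication/walsender_private.h"
#include "storage/lwlock.h"

/*
 * Sketch only: copy each walsender's sync priority while briefly holding
 * SyncRepLock in shared mode, so the slower per-row work in
 * pg_stat_get_wal_senders() could run without the lock held.
 */
static int *
snapshot_sync_priorities(void)
{
    int    *sync_priority = palloc(sizeof(int) * max_wal_senders);
    int     i;

    LWLockAcquire(SyncRepLock, LW_SHARED);
    for (i = 0; i < max_wal_senders; i++)
        sync_priority[i] = WalSndCtl->walsnds[i].sync_standby_priority;
    LWLockRelease(SyncRepLock);

    return sync_priority;
}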

<grammar>
+       /* Get first the priorities on each standby as long as we hold a lock */
should be
+       /* First get the priority of each standby as long as we hold a lock */
or even just
+       /* First get the priority of each standby */
--
Jim C. Nasby, Data Architect                       j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net

