On 07/03/17 23:30, Erik Rijkers wrote:
> On 2017-03-06 11:27, Petr Jelinek wrote:
>
>> 0001-Reserve-global-xmin-for-create-slot-snasphot-export.patch +
>> 0002-Don-t-use-on-disk-snapshots-for-snapshot-export-in-l.patch +
>> 0003-Prevent-snapshot-builder-xmin-from-going-backwards.patch +
>> 0004-Fix-xl_running_xacts-usage-in-snapshot-builder.patch +
>> 0005-Skip-unnecessary-snapshot-builds.patch +
>> 0001-Logical-replication-support-for-initial-data-copy-v6.patch
>
> I use three different machines (2 desktop, 1 server) to test logical
> replication, and all three have now at least once failed to correctly
> synchronise a pgbench session (amidst many successful runs, of course).
>
> I attach an output file from the test program, with the two logfiles
> (master + replica) of the failed run. The output file
> (out_20170307_1613.txt) contains the output of 5 runs of
> pgbench_derail2.sh. The first run failed; the next 4 were ok.
>
> But that's probably not very useful; perhaps pg_waldump is more useful?
> From what moment, or leading up to what moment, or over what period, is a
> pg_waldump useful? I can run it from the script, repeatedly, and
> only keep the dumped files when things go awry. Would that make sense?
Hi,

Yes, a waldump would be useful; the last segment should be enough, but possibly all segments mentioned in the log. The other useful thing would be to turn on log_connections and log_replication_commands. And finally, if you could dump the contents of pg_subscription_rel and pg_replication_origin_status on the subscriber, and pg_replication_slots on the publisher, at the end of the failed run, that would also help.

--
Petr Jelinek                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
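For reference, the diagnostics described above could be collected at the end of a failed run with something like the sketch below. The connection strings, data directory, and WAL segment name are assumptions for illustration; adjust them to the actual test setup, and take the segment name from the publisher's log.

```shell
#!/bin/sh
# Sketch: collect diagnostics after a failed logical-replication run.
# PUB/SUB connection strings and paths below are assumptions, not part
# of the original test setup.

PUB="host=localhost port=5432 dbname=postgres"   # publisher (assumed)
SUB="host=localhost port=5433 dbname=postgres"   # subscriber (assumed)

# Subscriber side: per-table sync state and replication progress.
psql "$SUB" -c "SELECT * FROM pg_subscription_rel;"
psql "$SUB" -c "SELECT * FROM pg_replication_origin_status;"

# Publisher side: replication slot state.
psql "$PUB" -c "SELECT * FROM pg_replication_slots;"

# Dump the last WAL segment mentioned in the publisher log
# (segment name here is a placeholder).
pg_waldump "$PGDATA/pg_wal/000000010000000000000003" > waldump.txt
```

Enabling the extra logging is a matter of setting log_connections = on and log_replication_commands = on in postgresql.conf (or via ALTER SYSTEM) and reloading the server before the run.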