On Wed, Nov 25, 2015 at 11:00 AM, Michael Paquier <michael.paqu...@gmail.com
> wrote:

> On Wed, Nov 25, 2015 at 10:55 AM, Alvaro Herrera <alvhe...@2ndquadrant.com
> > wrote:
>
>> Michael Paquier wrote:
>> > On Wed, Nov 25, 2015 at 6:22 AM, Alvaro Herrera <
>> alvhe...@2ndquadrant.com>
>> > wrote:
>> >
>> > > Michael Paquier wrote:
>>
>> > > This looks great as a starting point.  I think we should make TestLib
>> > > depend on PostgresNode instead of the other way around.  I will have a
>> > > look at that (I realize this means messing with the existing tests).
>> >
>> > Makes sense. My thought following that is that we should keep track
>> > of the nodes started as an array that is part of TestLib, with
>> > PGHOST set once at startup using tempdir_short. That's surely a
>> > refactoring patch somewhat independent of the recovery test suite.
>> > I would not mind writing something along those lines if needed.
>>
>> OK, please do.
>>
>> We can split this up in two patches: one introducing PostgresNode
>> (+ RecursiveCopy) together with the refactoring of existing test code,
>> and a subsequent one introducing RecoveryTest and the corresponding
>> subdir.  Sounds good?
>>
>
> Yeah, that matches my line of thoughts. Will do so.
>

The result of a couple of hours of hacking is attached:
- 0001 is the refactoring adding PostgresNode and RecursiveCopy. I have
also found it quite advantageous to move the routines that are thin
wrappers around system(), together with the logging machinery, into a
separate low-level library that PostgresNode depends on, called TestBase
in this patch. This way, all the infrastructure shares the same logging
management. The existing tests have been refactored to fit the new code,
which brings a couple of simplifications, particularly in the pg_rewind
tests, because they no longer need their own routines for environment
cleanup and logging. I have run the tests on OS X and Windows and they
pass, and I have checked that the SSL tests still work. (A short example
of a converted test script follows below.)
- 0002 adds the recovery tests, with RecoveryTest.pm now located in
src/test/recovery.
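
For reference, here is roughly what a converted test script looks like
with the new infrastructure. This is a minimal sketch written against
the routines added in 0001, not an excerpt from the patch:

    use strict;
    use warnings;
    use PostgresNode;
    use TestLib;
    use Test::More tests => 2;

    # Create a node on a free port, then initialize and start it.
    my $node = get_new_node();
    $node->initNode();
    $node->startNode();

    # Client programs find the node through the environment; PGHOST is
    # already set by TestLib when the module is loaded.
    $ENV{PGPORT} = $node->getPort();

    # Run a client program and check the server log for the query it
    # issued (issues_sql_like counts as two tests: exit code and log).
    issues_sql_like($node,
        [ 'vacuumdb', 'postgres' ],
        qr/statement: VACUUM;/,
        'SQL VACUUM run');

    # No explicit cleanup: TestLib's END block tears down any node
    # still registered when the script exits.
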
Regards,
-- 
Michael
From 686a58ae7edabc4d48767115283d2390c24c305a Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@otacoo.com>
Date: Wed, 25 Nov 2015 21:39:31 +0900
Subject: [PATCH 1/2] Refactor TAP tests to use common node management system

All the existing TAP tests now use a new module called PostgresNode that
centralizes the definition, logging and backup handling of the Postgres
nodes used in the tests. Some low-level pieces of TestLib are split out
into a second module called TestBase, which contains the routines used
by PostgresNode for logging and running commands.

This is in preparation for a more advanced facility dedicated to testing
recovery scenarios directly in core.
---
 src/bin/initdb/t/001_initdb.pl                 |   3 +-
 src/bin/pg_basebackup/t/010_pg_basebackup.pl   |  29 ++-
 src/bin/pg_controldata/t/001_pg_controldata.pl |  13 +-
 src/bin/pg_ctl/t/001_start_stop.pl             |   5 +-
 src/bin/pg_ctl/t/002_status.pl                 |  17 +-
 src/bin/pg_rewind/RewindTest.pm                | 122 +++++-------
 src/bin/pg_rewind/t/003_extrafiles.pl          |   4 +-
 src/bin/pg_rewind/t/004_pg_xlog_symlink.pl     |   3 +
 src/bin/scripts/t/010_clusterdb.pl             |  23 ++-
 src/bin/scripts/t/011_clusterdb_all.pl         |  13 +-
 src/bin/scripts/t/020_createdb.pl              |  12 +-
 src/bin/scripts/t/030_createlang.pl            |  19 +-
 src/bin/scripts/t/040_createuser.pl            |  20 +-
 src/bin/scripts/t/050_dropdb.pl                |  12 +-
 src/bin/scripts/t/060_droplang.pl              |  10 +-
 src/bin/scripts/t/070_dropuser.pl              |  10 +-
 src/bin/scripts/t/080_pg_isready.pl            |   8 +-
 src/bin/scripts/t/090_reindexdb.pl             |  19 +-
 src/bin/scripts/t/091_reindexdb_all.pl         |   9 +-
 src/bin/scripts/t/100_vacuumdb.pl              |  18 +-
 src/bin/scripts/t/101_vacuumdb_all.pl          |  10 +-
 src/bin/scripts/t/102_vacuumdb_stages.pl       |  12 +-
 src/test/perl/PostgresNode.pm                  | 233 +++++++++++++++++++++++
 src/test/perl/RecursiveCopy.pm                 |  42 ++++
 src/test/perl/TestBase.pm                      | 109 +++++++++++
 src/test/perl/TestLib.pm                       | 254 +++++++------------------
 src/test/ssl/ServerSetup.pm                    |  24 +--
 src/test/ssl/t/001_ssltests.pl                 |  32 ++--
 28 files changed, 702 insertions(+), 383 deletions(-)
 create mode 100644 src/test/perl/PostgresNode.pm
 create mode 100644 src/test/perl/RecursiveCopy.pm
 create mode 100644 src/test/perl/TestBase.pm

diff --git a/src/bin/initdb/t/001_initdb.pl b/src/bin/initdb/t/001_initdb.pl
index 299dcf5..3b5d7af 100644
--- a/src/bin/initdb/t/001_initdb.pl
+++ b/src/bin/initdb/t/001_initdb.pl
@@ -4,10 +4,11 @@
 
 use strict;
 use warnings;
+use TestBase;
 use TestLib;
 use Test::More tests => 14;
 
-my $tempdir = TestLib::tempdir;
+my $tempdir = TestBase::tempdir;
 my $xlogdir = "$tempdir/pgxlog";
 my $datadir = "$tempdir/data";
 
diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
index dc96bbf..65ed4de 100644
--- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl
+++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl
@@ -2,6 +2,8 @@ use strict;
 use warnings;
 use Cwd;
 use Config;
+use PostgresNode;
+use TestBase;
 use TestLib;
 use Test::More tests => 51;
 
@@ -9,8 +11,15 @@ program_help_ok('pg_basebackup');
 program_version_ok('pg_basebackup');
 program_options_handling_ok('pg_basebackup');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $tempdir = TestBase::tempdir;
+
+my $node = get_new_node();
+# Initialize node without replication settings
+$node->initNode(0);
+$node->startNode();
+my $pgdata = $node->getDataDir();
+
+$ENV{PGPORT} = $node->getPort();
 
 command_fails(['pg_basebackup'],
 	'pg_basebackup needs target directory specified');
@@ -26,19 +35,19 @@ if (open BADCHARS, ">>$tempdir/pgdata/FOO\xe0\xe0\xe0BAR")
 	close BADCHARS;
 }
 
-configure_hba_for_replication "$tempdir/pgdata";
-system_or_bail 'pg_ctl', '-D', "$tempdir/pgdata", 'reload';
+$node->setReplicationConf();
+system_or_bail 'pg_ctl', '-D', $pgdata, 'reload';
 
 command_fails(
 	[ 'pg_basebackup', '-D', "$tempdir/backup" ],
 	'pg_basebackup fails because of WAL configuration');
 
-open CONF, ">>$tempdir/pgdata/postgresql.conf";
+open CONF, ">>$pgdata/postgresql.conf";
 print CONF "max_replication_slots = 10\n";
 print CONF "max_wal_senders = 10\n";
 print CONF "wal_level = archive\n";
 close CONF;
-restart_test_server;
+$node->restartNode();
 
 command_ok([ 'pg_basebackup', '-D', "$tempdir/backup" ],
 	'pg_basebackup runs');
@@ -81,13 +90,13 @@ command_fails(
 
 # Tar format doesn't support filenames longer than 100 bytes.
 my $superlongname = "superlongname_" . ("x" x 100);
-my $superlongpath = "$tempdir/pgdata/$superlongname";
+my $superlongpath = "$pgdata/$superlongname";
 
 open FILE, ">$superlongpath" or die "unable to create file $superlongpath";
 close FILE;
 command_fails([ 'pg_basebackup', '-D', "$tempdir/tarbackup_l1", '-Ft' ],
 	'pg_basebackup tar with long name fails');
-unlink "$tempdir/pgdata/$superlongname";
+unlink "$pgdata/$superlongname";
 
 # The following tests test symlinks. Windows doesn't have symlinks, so
 # skip on Windows.
@@ -98,7 +107,7 @@ SKIP: {
 	# to our physical temp location.  That way we can use shorter names
 	# for the tablespace directories, which hopefully won't run afoul of
 	# the 99 character length limit.
-	my $shorter_tempdir = tempdir_short . "/tempdir";
+	my $shorter_tempdir = TestBase::tempdir_short . "/tempdir";
 	symlink "$tempdir", $shorter_tempdir;
 
 	mkdir "$tempdir/tblspc1";
@@ -120,7 +129,7 @@ SKIP: {
 			"-T$shorter_tempdir/tblspc1=$tempdir/tbackup/tblspc1" ],
 		'plain format with tablespaces succeeds with tablespace mapping');
 	ok(-d "$tempdir/tbackup/tblspc1", 'tablespace was relocated');
-	opendir(my $dh, "$tempdir/pgdata/pg_tblspc") or die;
+	opendir(my $dh, "$pgdata/pg_tblspc") or die;
 	ok( (   grep {
 		-l "$tempdir/backup1/pg_tblspc/$_"
 			and readlink "$tempdir/backup1/pg_tblspc/$_" eq
diff --git a/src/bin/pg_controldata/t/001_pg_controldata.pl b/src/bin/pg_controldata/t/001_pg_controldata.pl
index e2b0d42..343223b 100644
--- a/src/bin/pg_controldata/t/001_pg_controldata.pl
+++ b/src/bin/pg_controldata/t/001_pg_controldata.pl
@@ -1,16 +1,19 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 13;
 
-my $tempdir = TestLib::tempdir;
-
 program_help_ok('pg_controldata');
 program_version_ok('pg_controldata');
 program_options_handling_ok('pg_controldata');
 command_fails(['pg_controldata'], 'pg_controldata without arguments fails');
 command_fails([ 'pg_controldata', 'nonexistent' ],
-	'pg_controldata with nonexistent directory fails');
-standard_initdb "$tempdir/data";
-command_like([ 'pg_controldata', "$tempdir/data" ],
+			  'pg_controldata with nonexistent directory fails');
+
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
+
+command_like([ 'pg_controldata', $node->getDataDir() ],
 	qr/checkpoint/, 'pg_controldata produces output');
diff --git a/src/bin/pg_ctl/t/001_start_stop.pl b/src/bin/pg_ctl/t/001_start_stop.pl
index f57abce..d76fe80 100644
--- a/src/bin/pg_ctl/t/001_start_stop.pl
+++ b/src/bin/pg_ctl/t/001_start_stop.pl
@@ -1,11 +1,12 @@
 use strict;
 use warnings;
 use Config;
+use TestBase;
 use TestLib;
 use Test::More tests => 17;
 
-my $tempdir       = TestLib::tempdir;
-my $tempdir_short = TestLib::tempdir_short;
+my $tempdir       = TestBase::tempdir;
+my $tempdir_short = TestBase::tempdir_short;
 
 program_help_ok('pg_ctl');
 program_version_ok('pg_ctl');
diff --git a/src/bin/pg_ctl/t/002_status.pl b/src/bin/pg_ctl/t/002_status.pl
index 31f7c72..74cb68a 100644
--- a/src/bin/pg_ctl/t/002_status.pl
+++ b/src/bin/pg_ctl/t/002_status.pl
@@ -1,22 +1,25 @@
 use strict;
 use warnings;
+use PostgresNode;
+use TestBase;
 use TestLib;
 use Test::More tests => 3;
 
-my $tempdir       = TestLib::tempdir;
-my $tempdir_short = TestLib::tempdir_short;
+my $tempdir       = TestBase::tempdir;
+my $tempdir_short = TestBase::tempdir_short;
 
 command_exit_is([ 'pg_ctl', 'status', '-D', "$tempdir/nonexistent" ],
 	4, 'pg_ctl status with nonexistent directory');
 
-standard_initdb "$tempdir/data";
+my $node = get_new_node();
+$node->initNode();
 
-command_exit_is([ 'pg_ctl', 'status', '-D', "$tempdir/data" ],
+command_exit_is([ 'pg_ctl', 'status', '-D', $node->getDataDir() ],
 	3, 'pg_ctl status with server not running');
 
 system_or_bail 'pg_ctl', '-l', "$tempdir/logfile", '-D',
-  "$tempdir/data", '-w', 'start';
-command_exit_is([ 'pg_ctl', 'status', '-D', "$tempdir/data" ],
+  $node->getDataDir(), '-w', 'start';
+command_exit_is([ 'pg_ctl', 'status', '-D', $node->getDataDir() ],
 	0, 'pg_ctl status with server running');
 
-system_or_bail 'pg_ctl', 'stop', '-D', "$tempdir/data", '-m', 'fast';
+system_or_bail 'pg_ctl', 'stop', '-D', $node->getDataDir(), '-m', 'fast';
diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm
index a4c1737..5124df8 100644
--- a/src/bin/pg_rewind/RewindTest.pm
+++ b/src/bin/pg_rewind/RewindTest.pm
@@ -37,6 +37,8 @@ package RewindTest;
 use strict;
 use warnings;
 
+use PostgresNode;
+use TestBase;
 use TestLib;
 use Test::More;
 
@@ -47,10 +49,8 @@ use IPC::Run qw(run start);
 
 use Exporter 'import';
 our @EXPORT = qw(
-  $connstr_master
-  $connstr_standby
-  $test_master_datadir
-  $test_standby_datadir
+  $node_master
+  $node_standby
 
   append_to_file
   master_psql
@@ -66,15 +66,9 @@ our @EXPORT = qw(
   clean_rewind_test
 );
 
-our $test_master_datadir  = "$tmp_check/data_master";
-our $test_standby_datadir = "$tmp_check/data_standby";
-
-# Define non-conflicting ports for both nodes.
-my $port_master  = $ENV{PGPORT};
-my $port_standby = $port_master + 1;
-
-my $connstr_master  = "port=$port_master";
-my $connstr_standby = "port=$port_standby";
+# Master and standby nodes used by the rewind tests.
+our $node_master = undef;
+our $node_standby = undef;
 
 $ENV{PGDATABASE} = "postgres";
 
@@ -82,16 +76,16 @@ sub master_psql
 {
 	my $cmd = shift;
 
-	system_or_bail 'psql', '-q', '--no-psqlrc', '-d', $connstr_master,
-	  '-c', "$cmd";
+	system_or_bail 'psql', '-q', '--no-psqlrc', '-d',
+	  $node_master->getConnStr(), '-c', "$cmd";
 }
 
 sub standby_psql
 {
 	my $cmd = shift;
 
-	system_or_bail 'psql', '-q', '--no-psqlrc', '-d', $connstr_standby,
-	  '-c', "$cmd";
+	system_or_bail 'psql', '-q', '--no-psqlrc', '-d',
+      $node_standby->getConnStr(), '-c', "$cmd";
 }
 
 # Run a query against the master, and check that the output matches what's
@@ -104,7 +98,7 @@ sub check_query
 	# we want just the output, no formatting
 	my $result = run [
 		'psql',          '-q', '-A', '-t', '--no-psqlrc', '-d',
-		$connstr_master, '-c', $query ],
+		$node_master->getConnStr(), '-c', $query ],
 	  '>', \$stdout, '2>', \$stderr;
 
 	# We don't use ok() for the exit code and stderr, because we want this
@@ -169,12 +163,11 @@ sub append_to_file
 sub setup_cluster
 {
 	# Initialize master, data checksums are mandatory
-	rmtree($test_master_datadir);
-	standard_initdb($test_master_datadir);
+	$node_master = get_new_node();
+	$node_master->initNode();
 
 	# Custom parameters for master's postgresql.conf
-	append_to_file(
-		"$test_master_datadir/postgresql.conf", qq(
+	$node_master->appendConf("postgresql.conf", qq(
 wal_level = hot_standby
 max_wal_senders = 2
 wal_keep_segments = 20
@@ -185,17 +178,11 @@ hot_standby = on
 autovacuum = off
 max_connections = 10
 ));
-
-	# Accept replication connections on master
-	configure_hba_for_replication $test_master_datadir;
 }
 
 sub start_master
 {
-	system_or_bail('pg_ctl' , '-w',
-				   '-D' , $test_master_datadir,
-				   '-l',  "$log_path/master.log",
-				   "-o", "-p $port_master", 'start');
+	$node_master->startNode();
 
 	#### Now run the test-specific parts to initialize the master before setting
 	# up standby
@@ -203,24 +190,19 @@ sub start_master
 
 sub create_standby
 {
+	$node_standby = get_new_node();
+	$node_master->backupNode('my_backup');
+	$node_standby->initNodeFromBackup($node_master, 'my_backup');
+	my $connstr_master = $node_master->getConnStr();
 
-	# Set up standby with necessary parameter
-	rmtree $test_standby_datadir;
-
-	# Base backup is taken with xlog files included
-	system_or_bail('pg_basebackup', '-D', $test_standby_datadir,
-				   '-p', $port_master, '-x');
-	append_to_file(
-		"$test_standby_datadir/recovery.conf", qq(
+	$node_standby->appendConf("recovery.conf", qq(
 primary_conninfo='$connstr_master application_name=rewind_standby'
 standby_mode=on
 recovery_target_timeline='latest'
 ));
 
 	# Start standby
-	system_or_bail('pg_ctl', '-w', '-D', $test_standby_datadir,
-				   '-l', "$log_path/standby.log",
-				   '-o', "-p $port_standby", 'start');
+	$node_standby->startNode();
 
 	# The standby may have WAL to apply before it matches the primary.  That
 	# is fine, because no test examines the standby before promotion.
@@ -234,14 +216,14 @@ sub promote_standby
 	# Wait for the standby to receive and write all WAL.
 	my $wal_received_query =
 "SELECT pg_current_xlog_location() = write_location FROM pg_stat_replication WHERE application_name = 'rewind_standby';";
-	poll_query_until($wal_received_query, $connstr_master)
+	poll_query_until($wal_received_query, $node_master->getConnStr())
 	  or die "Timed out while waiting for standby to receive and write WAL";
 
 	# Now promote slave and insert some new data on master, this will put
 	# the master out-of-sync with the standby. Wait until the standby is
 	# out of recovery mode, and is ready to accept read-write connections.
-	system_or_bail('pg_ctl', '-w', '-D', $test_standby_datadir, 'promote');
-	poll_query_until("SELECT NOT pg_is_in_recovery()", $connstr_standby)
+	system_or_bail('pg_ctl', '-w', '-D', $node_standby->getDataDir(), 'promote');
+	poll_query_until("SELECT NOT pg_is_in_recovery()", $node_standby->getConnStr())
 	  or die "Timed out while waiting for promotion of standby";
 
 	# Force a checkpoint after the promotion. pg_rewind looks at the control
@@ -256,9 +238,13 @@ sub promote_standby
 sub run_pg_rewind
 {
 	my $test_mode = shift;
+	my $master_pgdata = $node_master->getDataDir();
+	my $standby_pgdata = $node_standby->getDataDir();
+	my $standby_connstr = $node_standby->getConnStr('postgres');
+	my $tmp_folder = TestBase::tempdir;
 
 	# Stop the master and be ready to perform the rewind
-	system_or_bail('pg_ctl', '-D', $test_master_datadir, '-m', 'fast', 'stop');
+	$node_master->stopNode();
 
 	# At this point, the rewind processing is ready to run.
 	# We now have a very simple scenario with a few diverged WAL record.
@@ -267,20 +253,19 @@ sub run_pg_rewind
 
 	# Keep a temporary postgresql.conf for master node or it would be
 	# overwritten during the rewind.
-	copy("$test_master_datadir/postgresql.conf",
-		 "$tmp_check/master-postgresql.conf.tmp");
+	copy("$master_pgdata/postgresql.conf",
+		 "$tmp_folder/master-postgresql.conf.tmp");
 
 	# Now run pg_rewind
 	if ($test_mode eq "local")
 	{
 		# Do rewind using a local pgdata as source
 		# Stop the master and be ready to perform the rewind
-		system_or_bail('pg_ctl', '-D', $test_standby_datadir,
-					   '-m', 'fast', 'stop');
+		$node_standby->stopNode();
 		command_ok(['pg_rewind',
 					"--debug",
-					"--source-pgdata=$test_standby_datadir",
-					"--target-pgdata=$test_master_datadir"],
+					"--source-pgdata=$standby_pgdata",
+					"--target-pgdata=$master_pgdata"],
 				   'pg_rewind local');
 	}
 	elsif ($test_mode eq "remote")
@@ -289,33 +274,30 @@ sub run_pg_rewind
 		command_ok(['pg_rewind',
 					"--debug",
 					"--source-server",
-					"port=$port_standby dbname=postgres",
-					"--target-pgdata=$test_master_datadir"],
+					$standby_connstr,
+					"--target-pgdata=$master_pgdata"],
 				   'pg_rewind remote');
 	}
 	else
 	{
-
 		# Cannot come here normally
 		die("Incorrect test mode specified");
 	}
 
 	# Now move back postgresql.conf with old settings
-	move("$tmp_check/master-postgresql.conf.tmp",
-		 "$test_master_datadir/postgresql.conf");
+	move("$tmp_folder/master-postgresql.conf.tmp",
+		 "$master_pgdata/postgresql.conf");
 
 	# Plug-in rewound node to the now-promoted standby node
-	append_to_file(
-		"$test_master_datadir/recovery.conf", qq(
+	my $port_standby = $node_standby->getPort();
+	$node_master->appendConf('recovery.conf', qq(
 primary_conninfo='port=$port_standby'
 standby_mode=on
 recovery_target_timeline='latest'
 ));
 
 	# Restart the master to check that rewind went correctly
-	system_or_bail('pg_ctl', '-w', '-D', $test_master_datadir,
-				   '-l', "$log_path/master.log",
-				   '-o', "-p $port_master", 'start');
+	$node_master->restartNode();
 
 	#### Now run the test-specific parts to check the result
 }
@@ -323,22 +305,6 @@ recovery_target_timeline='latest'
 # Clean up after the test. Stop both servers, if they're still running.
 sub clean_rewind_test
 {
-	if ($test_master_datadir)
-	{
-		system
-		  'pg_ctl', '-D', $test_master_datadir, '-m', 'immediate', 'stop';
-	}
-	if ($test_standby_datadir)
-	{
-		system
-		  'pg_ctl', '-D', $test_standby_datadir, '-m', 'immediate', 'stop';
-	}
-}
-
-# Stop the test servers, just in case they're still running.
-END
-{
-	my $save_rc = $?;
-	clean_rewind_test();
-	$? = $save_rc;
+	teardown_node($node_master) if (defined($node_master));
+	teardown_node($node_standby) if (defined($node_standby));
 }
diff --git a/src/bin/pg_rewind/t/003_extrafiles.pl b/src/bin/pg_rewind/t/003_extrafiles.pl
index d317f53..8494121 100644
--- a/src/bin/pg_rewind/t/003_extrafiles.pl
+++ b/src/bin/pg_rewind/t/003_extrafiles.pl
@@ -2,6 +2,7 @@
 
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 4;
 
@@ -17,7 +18,7 @@ sub run_test
 	RewindTest::setup_cluster();
 	RewindTest::start_master();
 
-	my $test_master_datadir = $RewindTest::test_master_datadir;
+	my $test_master_datadir = $node_master->getDataDir();
 
 	# Create a subdir and files that will be present in both
 	mkdir "$test_master_datadir/tst_both_dir";
@@ -30,6 +31,7 @@ sub run_test
 	RewindTest::create_standby();
 
 	# Create different subdirs and files in master and standby
+	my $test_standby_datadir = $node_standby->getDataDir();
 
 	mkdir "$test_standby_datadir/tst_standby_dir";
 	append_to_file "$test_standby_datadir/tst_standby_dir/standby_file1",
diff --git a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
index c5f72e2..4870f6a 100644
--- a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
+++ b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl
@@ -5,6 +5,7 @@ use strict;
 use warnings;
 use File::Copy;
 use File::Path qw(rmtree);
+use TestBase;
 use TestLib;
 use Test::More;
 if ($windows_os)
@@ -28,6 +29,8 @@ sub run_test
 	rmtree($master_xlogdir);
 	RewindTest::setup_cluster();
 
+	my $test_master_datadir = $node_master->getDataDir();
+
 	# turn pg_xlog into a symlink
 	print("moving $test_master_datadir/pg_xlog to $master_xlogdir\n");
 	move("$test_master_datadir/pg_xlog", $master_xlogdir) or die;
diff --git a/src/bin/scripts/t/010_clusterdb.pl b/src/bin/scripts/t/010_clusterdb.pl
index dc0d78a..80b3cd9 100644
--- a/src/bin/scripts/t/010_clusterdb.pl
+++ b/src/bin/scripts/t/010_clusterdb.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 13;
 
@@ -7,20 +8,24 @@ program_help_ok('clusterdb');
 program_version_ok('clusterdb');
 program_options_handling_ok('clusterdb');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-issues_sql_like(
-	[ 'clusterdb', 'postgres' ],
+$ENV{PGPORT} = $node->getPort();
+$ENV{PGDATABASE} = 'postgres';
+
+issues_sql_like($node,
+	[ 'clusterdb' ],
 	qr/statement: CLUSTER;/,
 	'SQL CLUSTER run');
 
-command_fails([ 'clusterdb', '-t', 'nonexistent', 'postgres' ],
-	'fails with nonexistent table');
+command_fails([ 'clusterdb', '-t', 'nonexistent' ],
+			  'fails with nonexistent table');
 
 psql 'postgres',
-'CREATE TABLE test1 (a int); CREATE INDEX test1x ON test1 (a); CLUSTER test1 USING test1x';
-issues_sql_like(
-	[ 'clusterdb', '-t', 'test1', 'postgres' ],
+	'CREATE TABLE test1 (a int); CREATE INDEX test1x ON test1 (a); CLUSTER test1 USING test1x';
+issues_sql_like($node,
+	[ 'clusterdb', '-t', 'test1' ],
 	qr/statement: CLUSTER test1;/,
 	'cluster specific table');
diff --git a/src/bin/scripts/t/011_clusterdb_all.pl b/src/bin/scripts/t/011_clusterdb_all.pl
index 7769f70..b66ade3 100644
--- a/src/bin/scripts/t/011_clusterdb_all.pl
+++ b/src/bin/scripts/t/011_clusterdb_all.pl
@@ -1,12 +1,19 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 2;
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-issues_sql_like(
+# cluster -a is not compatible with -d, hence enforce environment variables
+# correctly.
+$ENV{PGDATABASE} = 'postgres';
+$ENV{PGPORT} = $node->getPort();
+
+issues_sql_like($node,
 	[ 'clusterdb', '-a' ],
 	qr/statement: CLUSTER.*statement: CLUSTER/s,
 	'cluster all databases');
diff --git a/src/bin/scripts/t/020_createdb.pl b/src/bin/scripts/t/020_createdb.pl
index a44283c..5d502b1 100644
--- a/src/bin/scripts/t/020_createdb.pl
+++ b/src/bin/scripts/t/020_createdb.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 13;
 
@@ -7,14 +8,17 @@ program_help_ok('createdb');
 program_version_ok('createdb');
 program_options_handling_ok('createdb');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-issues_sql_like(
+$ENV{PGPORT} = $node->getPort();
+
+issues_sql_like($node,
 	[ 'createdb', 'foobar1' ],
 	qr/statement: CREATE DATABASE foobar1/,
 	'SQL CREATE DATABASE run');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'createdb', '-l', 'C', '-E', 'LATIN1', '-T', 'template0', 'foobar2' ],
 	qr/statement: CREATE DATABASE foobar2 ENCODING 'LATIN1'/,
 	'create database with encoding');
diff --git a/src/bin/scripts/t/030_createlang.pl b/src/bin/scripts/t/030_createlang.pl
index 7ff0a3e..2bfc540 100644
--- a/src/bin/scripts/t/030_createlang.pl
+++ b/src/bin/scripts/t/030_createlang.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 14;
 
@@ -7,18 +8,22 @@ program_help_ok('createlang');
 program_version_ok('createlang');
 program_options_handling_ok('createlang');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
+
+$ENV{PGPORT} = $node->getPort();
+$ENV{PGDATABASE} = 'postgres';
 
 command_fails(
-	[ 'createlang', 'plpgsql', 'postgres' ],
+	[ 'createlang', 'plpgsql' ],
 	'fails if language already exists');
 
-psql 'postgres', 'DROP EXTENSION plpgsql';
-issues_sql_like(
-	[ 'createlang', 'plpgsql', 'postgres' ],
+psql $node->getConnStr('postgres'), 'DROP EXTENSION plpgsql';
+issues_sql_like($node,
+	[ 'createlang', 'plpgsql' ],
 	qr/statement: CREATE EXTENSION "plpgsql"/,
 	'SQL CREATE EXTENSION run');
 
-command_like([ 'createlang', '--list', 'postgres' ],
+command_like([ 'createlang', '--list' ],
 	qr/plpgsql/, 'list output');
diff --git a/src/bin/scripts/t/040_createuser.pl b/src/bin/scripts/t/040_createuser.pl
index 4d44e14..0238b2f 100644
--- a/src/bin/scripts/t/040_createuser.pl
+++ b/src/bin/scripts/t/040_createuser.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 17;
 
@@ -7,24 +8,29 @@ program_help_ok('createuser');
 program_version_ok('createuser');
 program_options_handling_ok('createuser');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-issues_sql_like(
+$ENV{PGDATABASE} = 'postgres';
+$ENV{PGPORT} = $node->getPort();
+
+issues_sql_like($node,
 	[ 'createuser', 'user1' ],
 qr/statement: CREATE ROLE user1 NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;/,
 	'SQL CREATE USER run');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'createuser', '-L', 'role1' ],
 qr/statement: CREATE ROLE role1 NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT NOLOGIN;/,
 	'create a non-login role');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'createuser', '-r', 'user2' ],
 qr/statement: CREATE ROLE user2 NOSUPERUSER NOCREATEDB CREATEROLE INHERIT LOGIN;/,
 	'create a CREATEROLE user');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'createuser', '-s', 'user3' ],
 qr/statement: CREATE ROLE user3 SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;/,
 	'create a superuser');
 
-command_fails([ 'createuser', 'user1' ], 'fails if role already exists');
+command_fails([ 'createuser', 'user1' ],
+			  'fails if role already exists');
diff --git a/src/bin/scripts/t/050_dropdb.pl b/src/bin/scripts/t/050_dropdb.pl
index 3065e50..fc11e46 100644
--- a/src/bin/scripts/t/050_dropdb.pl
+++ b/src/bin/scripts/t/050_dropdb.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 11;
 
@@ -7,11 +8,14 @@ program_help_ok('dropdb');
 program_version_ok('dropdb');
 program_options_handling_ok('dropdb');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-psql 'postgres', 'CREATE DATABASE foobar1';
-issues_sql_like(
+$ENV{PGPORT} = $node->getPort();
+
+psql $node->getConnStr('postgres'), 'CREATE DATABASE foobar1';
+issues_sql_like($node,
 	[ 'dropdb', 'foobar1' ],
 	qr/statement: DROP DATABASE foobar1/,
 	'SQL DROP DATABASE run');
diff --git a/src/bin/scripts/t/060_droplang.pl b/src/bin/scripts/t/060_droplang.pl
index 6a21d7e..bccd9ce 100644
--- a/src/bin/scripts/t/060_droplang.pl
+++ b/src/bin/scripts/t/060_droplang.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 11;
 
@@ -7,10 +8,13 @@ program_help_ok('droplang');
 program_version_ok('droplang');
 program_options_handling_ok('droplang');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-issues_sql_like(
+$ENV{PGPORT} = $node->getPort();
+
+issues_sql_like($node,
 	[ 'droplang', 'plpgsql', 'postgres' ],
 	qr/statement: DROP EXTENSION "plpgsql"/,
 	'SQL DROP EXTENSION run');
diff --git a/src/bin/scripts/t/070_dropuser.pl b/src/bin/scripts/t/070_dropuser.pl
index bbb3b79..9d5bef7 100644
--- a/src/bin/scripts/t/070_dropuser.pl
+++ b/src/bin/scripts/t/070_dropuser.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 11;
 
@@ -7,11 +8,14 @@ program_help_ok('dropuser');
 program_version_ok('dropuser');
 program_options_handling_ok('dropuser');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
+
+$ENV{PGPORT} = $node->getPort();
 
 psql 'postgres', 'CREATE ROLE foobar1';
-issues_sql_like(
+issues_sql_like($node,
 	[ 'dropuser', 'foobar1' ],
 	qr/statement: DROP ROLE foobar1/,
 	'SQL DROP ROLE run');
diff --git a/src/bin/scripts/t/080_pg_isready.pl b/src/bin/scripts/t/080_pg_isready.pl
index f432505..47aef9b 100644
--- a/src/bin/scripts/t/080_pg_isready.pl
+++ b/src/bin/scripts/t/080_pg_isready.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 10;
 
@@ -9,7 +10,10 @@ program_options_handling_ok('pg_isready');
 
 command_fails(['pg_isready'], 'fails with no server running');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
+
+$ENV{PGPORT} = $node->getPort();
 
 command_ok(['pg_isready'], 'succeeds with server running');
diff --git a/src/bin/scripts/t/090_reindexdb.pl b/src/bin/scripts/t/090_reindexdb.pl
index 42628c2..9bcf7ec 100644
--- a/src/bin/scripts/t/090_reindexdb.pl
+++ b/src/bin/scripts/t/090_reindexdb.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 20;
 
@@ -7,35 +8,37 @@ program_help_ok('reindexdb');
 program_version_ok('reindexdb');
 program_options_handling_ok('reindexdb');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
+$ENV{PGPORT} = $node->getPort();
 $ENV{PGOPTIONS} = '--client-min-messages=WARNING';
 
-issues_sql_like(
+issues_sql_like($node,
 	[ 'reindexdb', 'postgres' ],
 	qr/statement: REINDEX DATABASE postgres;/,
 	'SQL REINDEX run');
 
 psql 'postgres',
   'CREATE TABLE test1 (a int); CREATE INDEX test1x ON test1 (a);';
-issues_sql_like(
+issues_sql_like($node,
 	[ 'reindexdb', '-t', 'test1', 'postgres' ],
 	qr/statement: REINDEX TABLE test1;/,
 	'reindex specific table');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'reindexdb', '-i', 'test1x', 'postgres' ],
 	qr/statement: REINDEX INDEX test1x;/,
 	'reindex specific index');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'reindexdb', '-S', 'pg_catalog', 'postgres' ],
 	qr/statement: REINDEX SCHEMA pg_catalog;/,
 	'reindex specific schema');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'reindexdb', '-s', 'postgres' ],
 	qr/statement: REINDEX SYSTEM postgres;/,
 	'reindex system tables');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'reindexdb', '-v', '-t', 'test1', 'postgres' ],
 	qr/statement: REINDEX \(VERBOSE\) TABLE test1;/,
 	'reindex with verbose output');
diff --git a/src/bin/scripts/t/091_reindexdb_all.pl b/src/bin/scripts/t/091_reindexdb_all.pl
index ffadf29..732ed1d 100644
--- a/src/bin/scripts/t/091_reindexdb_all.pl
+++ b/src/bin/scripts/t/091_reindexdb_all.pl
@@ -1,14 +1,17 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 2;
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
+$ENV{PGPORT} = $node->getPort();
 $ENV{PGOPTIONS} = '--client-min-messages=WARNING';
 
-issues_sql_like(
+issues_sql_like($node,
 	[ 'reindexdb', '-a' ],
 	qr/statement: REINDEX.*statement: REINDEX/s,
 	'reindex all databases');
diff --git a/src/bin/scripts/t/100_vacuumdb.pl b/src/bin/scripts/t/100_vacuumdb.pl
index ac160ba..5b825f1 100644
--- a/src/bin/scripts/t/100_vacuumdb.pl
+++ b/src/bin/scripts/t/100_vacuumdb.pl
@@ -1,5 +1,6 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 18;
 
@@ -7,26 +8,29 @@ program_help_ok('vacuumdb');
 program_version_ok('vacuumdb');
 program_options_handling_ok('vacuumdb');
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-issues_sql_like(
+$ENV{PGPORT} = $node->getPort();
+
+issues_sql_like($node,
 	[ 'vacuumdb', 'postgres' ],
 	qr/statement: VACUUM;/,
 	'SQL VACUUM run');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'vacuumdb', '-f', 'postgres' ],
 	qr/statement: VACUUM \(FULL\);/,
 	'vacuumdb -f');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'vacuumdb', '-F', 'postgres' ],
 	qr/statement: VACUUM \(FREEZE\);/,
 	'vacuumdb -F');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'vacuumdb', '-z', 'postgres' ],
 	qr/statement: VACUUM \(ANALYZE\);/,
 	'vacuumdb -z');
-issues_sql_like(
+issues_sql_like($node,
 	[ 'vacuumdb', '-Z', 'postgres' ],
 	qr/statement: ANALYZE;/,
 	'vacuumdb -Z');
diff --git a/src/bin/scripts/t/101_vacuumdb_all.pl b/src/bin/scripts/t/101_vacuumdb_all.pl
index e90f321..829d7c9 100644
--- a/src/bin/scripts/t/101_vacuumdb_all.pl
+++ b/src/bin/scripts/t/101_vacuumdb_all.pl
@@ -1,12 +1,16 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 2;
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-issues_sql_like(
+$ENV{PGPORT} = $node->getPort();
+
+issues_sql_like($node,
 	[ 'vacuumdb', '-a' ],
 	qr/statement: VACUUM.*statement: VACUUM/s,
 	'vacuum all databases');
diff --git a/src/bin/scripts/t/102_vacuumdb_stages.pl b/src/bin/scripts/t/102_vacuumdb_stages.pl
index 57b980e..db3288d 100644
--- a/src/bin/scripts/t/102_vacuumdb_stages.pl
+++ b/src/bin/scripts/t/102_vacuumdb_stages.pl
@@ -1,12 +1,16 @@
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use Test::More tests => 4;
 
-my $tempdir = tempdir;
-start_test_server $tempdir;
+my $node = get_new_node();
+$node->initNode();
+$node->startNode();
 
-issues_sql_like(
+$ENV{PGPORT} = $node->getPort();
+
+issues_sql_like($node,
 	[ 'vacuumdb', '--analyze-in-stages', 'postgres' ],
 qr/.*statement:\ SET\ default_statistics_target=1;\ SET\ vacuum_cost_delay=0;
                    .*statement:\ ANALYZE.*
@@ -17,7 +21,7 @@ qr/.*statement:\ SET\ default_statistics_target=1;\ SET\ vacuum_cost_delay=0;
 	'analyze three times');
 
 
-issues_sql_like(
+issues_sql_like($node,
 	[ 'vacuumdb', '--analyze-in-stages', '--all' ],
 qr/.*statement:\ SET\ default_statistics_target=1;\ SET\ vacuum_cost_delay=0;
                    .*statement:\ ANALYZE.*
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
new file mode 100644
index 0000000..ff0e90d
--- /dev/null
+++ b/src/test/perl/PostgresNode.pm
@@ -0,0 +1,233 @@
+# PostgresNode, simple node representation for regression tests
+#
+# Regression tests should use this class to define the nodes they need
+# for complex scenarios. The object is kept deliberately simple, with
+# only a basic set of routines to configure, initialize and manage a
+# node.
+
+package PostgresNode;
+
+use strict;
+use warnings;
+
+use RecursiveCopy;
+use TestBase;
+
+sub new {
+	my $class = shift;
+	my $pghost = shift;
+	my $pgport = shift;
+	my $self = {
+		_port => undef,
+		_host => undef,
+		_basedir => undef,
+		_applname => undef,
+		_logfile => undef
+			};
+
+	# Set up each field
+	$self->{_port} = $pgport;
+	$self->{_host} = $pghost;
+	$self->{_basedir} = TestBase::tempdir;
+	$self->{_applname} = "node_$pgport";
+	$self->{_logfile} = "$log_path/node_$pgport.log";
+	bless $self, $class;
+	$self->dumpNodeInfo();
+	return $self;
+}
+
+# Get routines for various variables
+sub getPort {
+	my ( $self ) = @_;
+	return $self->{_port};
+}
+sub getHost {
+	my ( $self ) = @_;
+	return $self->{_host};
+}
+sub getConnStr {
+	my ( $self, $dbname ) = @_;
+	my $pgport = $self->getPort();
+	my $pghost = $self->getHost();
+	if (!defined($dbname))
+	{
+		return "port=$pgport host=$pghost";
+	}
+	return "port=$pgport host=$pghost dbname=$dbname";
+}
+sub getDataDir {
+	my ( $self ) = @_;
+	my $basedir = $self->{_basedir};
+	return "$basedir/pgdata";
+}
+sub getApplName {
+	my ( $self ) = @_;
+	return $self->{_applname};
+}
+sub getLogFile {
+	my ( $self ) = @_;
+	return $self->{_logfile};
+}
+sub getArchiveDir {
+	my ( $self ) = @_;
+	my $basedir = $self->{_basedir};
+	return "$basedir/archives";
+}
+sub getBackupDir {
+	my ( $self ) = @_;
+	my $basedir = $self->{_basedir};
+	return "$basedir/backup";
+}
+
+# Dump node information
+sub dumpNodeInfo {
+	my ( $self ) = @_;
+	print 'Data directory: ' . $self->getDataDir() . "\n";
+	print 'Backup directory: ' . $self->getBackupDir() . "\n";
+	print 'Archive directory: ' . $self->getArchiveDir() . "\n";
+	print 'Connection string: ' . $self->getConnStr() . "\n";
+	print 'Application name: ' . $self->getApplName() . "\n";
+	print 'Log file: ' . $self->getLogFile() . "\n";
+}
+
+sub setReplicationConf
+{
+	my ( $self ) = @_;
+	my $pgdata = $self->getDataDir();
+
+	open my $hba, ">>$pgdata/pg_hba.conf";
+	print $hba "\n# Allow replication (set up by PostgresNode.pm)\n";
+	if (! $windows_os)
+	{
+		print $hba "local replication all trust\n";
+	}
+	else
+    {
+		print $hba "host replication all 127.0.0.1/32 sspi include_realm=1 map=regress\n";
+	}
+	close $hba;
+
+}
+
+# Initialize a new cluster for testing.
+#
+# Authentication is set up so that only the current OS user can access the
+# cluster. On Unix, we use Unix domain socket connections, with the socket in
+# a directory that's only accessible to the current user to ensure that.
+# On Windows, we use SSPI authentication to ensure the same (by pg_regress
+# --config-auth).
+sub initNode
+{
+	my ( $self, $repconf ) = @_;
+	my $port = $self->getPort();
+	my $pgdata = $self->getDataDir();
+	my $host = $self->getHost();
+
+	$repconf = 1 if (!defined($repconf));
+
+	mkdir $self->getBackupDir();
+	mkdir $self->getArchiveDir();
+
+	system_or_bail('initdb', '-D', $pgdata, '-A' , 'trust', '-N');
+	system_or_bail($ENV{PG_REGRESS}, '--config-auth', $pgdata);
+
+	open my $conf, ">>$pgdata/postgresql.conf";
+	print $conf "\n# Added by PostgresNode.pm\n";
+	print $conf "fsync = off\n";
+	print $conf "log_statement = all\n";
+	print $conf "port = $port\n";
+	if ($windows_os)
+    {
+		print $conf "listen_addresses = '$host'\n";
+	}
+	else
+	{
+		print $conf "unix_socket_directories = '$host'\n";
+		print $conf "listen_addresses = ''\n";
+	}
+	close $conf;
+
+	$self->setReplicationConf() if ($repconf);
+}
+sub appendConf
+{
+	my ($self, $filename, $str) = @_;
+
+	my $conffile = $self->getDataDir() . '/' . $filename;
+
+	open my $fh, ">>", $conffile or die "could not open file $filename";
+	print $fh $str;
+	close $fh;
+}
+sub backupNode
+{
+	my ($self, $backup_name) = @_;
+    my $backup_path = $self->getBackupDir() . '/' . $backup_name;
+	my $port = $self->getPort();
+
+	print "# Taking backup $backup_name from node with port $port\n";
+	system_or_bail("pg_basebackup -D $backup_path -p $port -x");
+	print "# Backup finished\n";
+}
+sub initNodeFromBackup
+{
+	my ( $self, $root_node, $backup_name ) = @_;
+	my $backup_path = $root_node->getBackupDir() . '/' . $backup_name;
+	my $port = $self->getPort();
+	my $root_port = $root_node->getPort();
+
+	print "Initializing node $port from backup \"$backup_name\" of node $root_port\n";
+	die "Backup $backup_path does not exist" if (! -d $backup_path);
+
+	mkdir $self->getBackupDir();
+	mkdir $self->getArchiveDir();
+
+	my $data_path = $self->getDataDir();
+    rmdir($data_path);
+    RecursiveCopy::copypath($backup_path, $data_path);
+    chmod(0700, $data_path);
+
+	# Base configuration for this node
+	$self->appendConf('postgresql.conf', qq(
+port = $port
+));
+	$self->setReplicationConf();
+}
+sub startNode
+{
+	my ( $self ) = @_;
+	my $port = $self->getPort();
+	my $pgdata = $self->getDataDir();
+	print("### Starting test server in $pgdata\n");
+	my $ret = system_log('pg_ctl', '-w', '-D', $self->getDataDir(),
+						 '-l', $self->getLogFile(),
+						 'start');
+
+	if ($ret != 0)
+	{
+		print "# pg_ctl failed; logfile:\n";
+		system('cat', $self->getLogFile());
+		BAIL_OUT("pg_ctl failed");
+	}
+}
+sub stopNode
+{
+	my ( $self, $mode ) = @_;
+	my $port = $self->getPort();
+	my $pgdata = $self->getDataDir();
+	$mode = 'fast' if (!defined($mode));
+	print "### Stopping node in $pgdata with port $port using mode $mode\n";
+	system_log('pg_ctl', '-D', $pgdata, '-m',
+			   $mode, 'stop');
+}
+sub restartNode
+{
+	my ( $self ) = @_;
+	my $port = $self->getPort();
+	my $pgdata = $self->getDataDir();
+	my $logfile = $self->getLogFile();
+	system_log('pg_ctl', '-D', $pgdata, '-w', '-l',
+			   $logfile, 'restart');
+}
+
+1;
diff --git a/src/test/perl/RecursiveCopy.pm b/src/test/perl/RecursiveCopy.pm
new file mode 100644
index 0000000..f06f975
--- /dev/null
+++ b/src/test/perl/RecursiveCopy.pm
@@ -0,0 +1,42 @@
+# RecursiveCopy, a simple recursive copy implementation
+#
+# Having our own implementation avoids making the regression test code
+# depend on any external module for this simple operation.
+
+package RecursiveCopy;
+use strict;
+use warnings;
+
+use File::Basename;
+use File::Copy;
+
+sub copypath {
+	my $srcpath = shift;
+	my $destpath = shift;
+
+	die "Cannot operate on symlinks" if ( -l $srcpath || -l $destpath );
+
+	# This source path is a file, simply copy it to destination with the
+	# same name.
+	die "Destination path $destpath exists as file" if ( -f $destpath );
+	if ( -f $srcpath )
+	{
+		my $filename = basename($destpath);
+		copy($srcpath, "$destpath");
+		return 1;
+	}
+
+	die "Destination needs to be a directory" if (! -d $srcpath);
+	mkdir($destpath);
+
+	# Scan existing source directory and recursively copy everything.
+	opendir(my $directory, $srcpath);
+	while (my $entry = readdir($directory)) {
+		next if ($entry eq '.' || $entry eq '..');
+		RecursiveCopy::copypath("$srcpath/$entry", "$destpath/$entry");
+	}
+	closedir($directory);
+	return 1;
+}
+
+1;
diff --git a/src/test/perl/TestBase.pm b/src/test/perl/TestBase.pm
new file mode 100644
index 0000000..3a61939
--- /dev/null
+++ b/src/test/perl/TestBase.pm
@@ -0,0 +1,109 @@
+# Set of low-level routines dedicated to base tasks for regression
+# tests, like command execution and logging. This module should not
+# depend on any of the other modules dedicated to the regression tests
+# of Postgres.
+
+package TestBase;
+
+use strict;
+use warnings;
+
+use Config;
+use Exporter 'import';
+use File::Basename;
+use File::Spec;
+use File::Temp ();
+use IPC::Run qw(run start);
+
+use Test::More;
+
+our @EXPORT = qw(
+  system_or_bail
+  system_log
+  run_log
+
+  $tmp_check
+  $log_path
+  $windows_os
+);
+
+# Internal modules
+use SimpleTee;
+
+our $windows_os = $Config{osname} eq 'MSWin32' || $Config{osname} eq 'msys';
+
+# Open log file. For each test, the log file name uses the name of the
+# file launching this module, without the .pl suffix.
+our ($tmp_check, $log_path);
+$tmp_check = $ENV{TESTDIR} ? "$ENV{TESTDIR}/tmp_check" : "tmp_check";
+$log_path = "$tmp_check/log";
+mkdir $tmp_check;
+mkdir $log_path;
+my $test_logfile = basename($0);
+$test_logfile =~ s/\.[^.]+$//;
+$test_logfile = "$log_path/regress_log_$test_logfile";
+open TESTLOG, '>', $test_logfile or die "Cannot open STDOUT to logfile: $!";
+
+# Hijack STDOUT and STDERR to the log file
+open(ORIG_STDOUT, ">&STDOUT");
+open(ORIG_STDERR, ">&STDERR");
+open(STDOUT, ">&TESTLOG");
+open(STDERR, ">&TESTLOG");
+
+# The test output (ok ...) needs to be printed to the original STDOUT so
+# that the 'prove' program can parse it, and display it to the user in
+# real time. But also copy it to the log file, to provide more context
+# in the log.
+my $builder = Test::More->builder;
+my $fh = $builder->output;
+tie *$fh, "SimpleTee", *ORIG_STDOUT, *TESTLOG;
+$fh = $builder->failure_output;
+tie *$fh, "SimpleTee", *ORIG_STDERR, *TESTLOG;
+
+# Enable auto-flushing for all the file handles. Stderr and stdout are
+# redirected to the same file, and buffering causes the lines to appear
+# in the log in confusing order.
+autoflush STDOUT 1;
+autoflush STDERR 1;
+autoflush TESTLOG 1;
+
+#
+# Helper functions
+#
+sub tempdir
+{
+	return File::Temp::tempdir(
+		'tmp_testXXXX',
+		DIR => $ENV{TESTDIR} || cwd(),
+		CLEANUP => 1);
+}
+
+sub tempdir_short
+{
+
+	# Use a separate temp dir outside the build tree for the
+	# Unix-domain socket, to avoid file name length issues.
+	return File::Temp::tempdir(CLEANUP => 1);
+}
+
+sub system_or_bail
+{
+	if (system_log(@_) != 0)
+	{
+		BAIL_OUT("system $_[0] failed: $?");
+	}
+}
+
+sub system_log
+{
+	print("# Running: " . join(" ", @_) ."\n");
+	return system(@_);
+}
+
+sub run_log
+{
+	print("# Running: " . join(" ", @{$_[0]}) ."\n");
+	return run (@_);
+}
+
+1;
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index 02533eb..a1473c6 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -6,18 +6,11 @@ use warnings;
 use Config;
 use Exporter 'import';
 our @EXPORT = qw(
-  tempdir
-  tempdir_short
-  standard_initdb
-  configure_hba_for_replication
-  start_test_server
-  restart_test_server
+  get_new_node
+  teardown_node
   psql
   slurp_dir
   slurp_file
-  system_or_bail
-  system_log
-  run_log
 
   command_ok
   command_fails
@@ -27,10 +20,6 @@ our @EXPORT = qw(
   program_options_handling_ok
   command_like
   issues_sql_like
-
-  $tmp_check
-  $log_path
-  $windows_os
 );
 
 use Cwd;
@@ -39,46 +28,10 @@ use File::Spec;
 use File::Temp ();
 use IPC::Run qw(run start);
 
-use SimpleTee;
-
 use Test::More;
 
-our $windows_os = $Config{osname} eq 'MSWin32' || $Config{osname} eq 'msys';
-
-# Open log file. For each test, the log file name uses the name of the
-# file launching this module, without the .pl suffix.
-our ($tmp_check, $log_path);
-$tmp_check = $ENV{TESTDIR} ? "$ENV{TESTDIR}/tmp_check" : "tmp_check";
-$log_path = "$tmp_check/log";
-mkdir $tmp_check;
-mkdir $log_path;
-my $test_logfile = basename($0);
-$test_logfile =~ s/\.[^.]+$//;
-$test_logfile = "$log_path/regress_log_$test_logfile";
-open TESTLOG, '>', $test_logfile or die "Cannot open STDOUT to logfile: $!";
-
-# Hijack STDOUT and STDERR to the log file
-open(ORIG_STDOUT, ">&STDOUT");
-open(ORIG_STDERR, ">&STDERR");
-open(STDOUT, ">&TESTLOG");
-open(STDERR, ">&TESTLOG");
-
-# The test output (ok ...) needs to be printed to the original STDOUT so
-# that the 'prove' program can parse it, and display it to the user in
-# real time. But also copy it to the log file, to provide more context
-# in the log.
-my $builder = Test::More->builder;
-my $fh = $builder->output;
-tie *$fh, "SimpleTee", *ORIG_STDOUT, *TESTLOG;
-$fh = $builder->failure_output;
-tie *$fh, "SimpleTee", *ORIG_STDERR, *TESTLOG;
-
-# Enable auto-flushing for all the file handles. Stderr and stdout are
-# redirected to the same file, and buffering causes the lines to appear
-# in the log in confusing order.
-autoflush STDOUT 1;
-autoflush STDERR 1;
-autoflush TESTLOG 1;
+# Internal modules
+use TestBase;
 
 # Set to untranslated messages, to be able to compare program output
 # with expected strings.
@@ -94,129 +47,74 @@ delete $ENV{PGREQUIRESSL};
 delete $ENV{PGSERVICE};
 delete $ENV{PGSSLMODE};
 delete $ENV{PGUSER};
-
-if (!$ENV{PGPORT})
-{
-	$ENV{PGPORT} = 65432;
-}
-
-$ENV{PGPORT} = int($ENV{PGPORT}) % 65536;
-
-
-#
-# Helper functions
-#
-
-
-sub tempdir
-{
-	return File::Temp::tempdir(
-		'tmp_testXXXX',
-		DIR => $ENV{TESTDIR} || cwd(),
-		CLEANUP => 1);
-}
-
-sub tempdir_short
-{
-
-	# Use a separate temp dir outside the build tree for the
-	# Unix-domain socket, to avoid file name length issues.
-	return File::Temp::tempdir(CLEANUP => 1);
-}
-
-# Initialize a new cluster for testing.
-#
-# The PGHOST environment variable is set to connect to the new cluster.
-#
-# Authentication is set up so that only the current OS user can access the
-# cluster. On Unix, we use Unix domain socket connections, with the socket in
-# a directory that's only accessible to the current user to ensure that.
-# On Windows, we use SSPI authentication to ensure the same (by pg_regress
-# --config-auth).
-sub standard_initdb
-{
-	my $pgdata = shift;
-	system_or_bail('initdb', '-D', "$pgdata", '-A' , 'trust', '-N');
-	system_or_bail($ENV{PG_REGRESS}, '--config-auth', $pgdata);
-
-	my $tempdir_short = tempdir_short;
-
-	open CONF, ">>$pgdata/postgresql.conf";
-	print CONF "\n# Added by TestLib.pm)\n";
-	print CONF "fsync = off\n";
-	if ($windows_os)
+delete $ENV{PGPORT};
+delete $ENV{PGHOST};
+
+# PGHOST is set once and for all through a single series of tests
+# when this module is loaded.
+my $test_pghost = $windows_os ? "127.0.0.1" : TestBase::tempdir_short;
+$ENV{PGHOST} = $test_pghost;
+$ENV{PGDATABASE} = 'postgres';
+
+# Tracking of last port value assigned to accelerate free port lookup.
+# XXX: Should this part use PG_VERSION_NUM?
+my $last_port_assigned =  90600 % 16384 + 49152;
+# Tracker of active nodes
+my @active_nodes = ();
+
+# Get a port number not currently in use and register a new node on it.
+#
+# A port is considered usable when no server answers on it (checked
+# with pg_isready) and no node already registered in this test run is
+# using it. The new node is added to the list of active nodes before
+# being returned, so later calls will not hand out the same port even
+# if the node has not been started yet.
+sub get_new_node
+{
+	my $found = 0;
+	my $port = $last_port_assigned;
+
+	while ($found == 0)
 	{
-		print CONF "listen_addresses = '127.0.0.1'\n";
+		$port++;
+		print "# Checking for port $port\n";
+		my $devnull = $windows_os ? "nul" : "/dev/null";
+		if (!run_log(['pg_isready', '-p', $port]))
+		{
+			$found = 1;
+			# Found a potential candidate, check first that it is
+			# not included in the list of registered nodes.
+			foreach my $node (@active_nodes)
+			{
+				$found = 0 if ($node->getPort() == $port);
+			}
+		}
 	}
-	else
-	{
-		print CONF "unix_socket_directories = '$tempdir_short'\n";
-		print CONF "listen_addresses = ''\n";
-	}
-	close CONF;
-
-	$ENV{PGHOST}         = $windows_os ? "127.0.0.1" : $tempdir_short;
-}
 
-# Set up the cluster to allow replication connections, in the same way that
-# standard_initdb does for normal connections.
-sub configure_hba_for_replication
-{
-	my $pgdata = shift;
+	print "# Found free port $port\n";
+	# Lock port number found by creating a new node
+	my $node = new PostgresNode($test_pghost, $port);
 
-	open HBA, ">>$pgdata/pg_hba.conf";
-	print HBA "\n# Allow replication (set up by TestLib.pm)\n";
-	if (! $windows_os)
-	{
-		print HBA "local replication all trust\n";
-	}
-	else
-	{
-		print HBA "host replication all 127.0.0.1/32 sspi include_realm=1 map=regress\n";
-	}
-	close HBA;
+	# Add node to list of nodes currently in use
+	push(@active_nodes, $node);
+	$last_port_assigned = $port;
+	return $node;
 }
 
-my ($test_server_datadir, $test_server_logfile);
-
-
-# Initialize a new cluster for testing in given directory, and start it.
-sub start_test_server
+# Remove any traces of given node.
+sub teardown_node
 {
-	my ($tempdir) = @_;
-	my $ret;
+	my $node = shift;
 
-	print("### Starting test server in $tempdir\n");
-	standard_initdb "$tempdir/pgdata";
-
-	$ret = system_log('pg_ctl', '-D', "$tempdir/pgdata", '-w', '-l',
-	  "$log_path/postmaster.log", '-o', "--log-statement=all",
-	  'start');
-
-	if ($ret != 0)
-	{
-		print "# pg_ctl failed; logfile:\n";
-		system('cat', "$log_path/postmaster.log");
-		BAIL_OUT("pg_ctl failed");
-	}
-
-	$test_server_datadir = "$tempdir/pgdata";
-	$test_server_logfile = "$log_path/postmaster.log";
-}
-
-sub restart_test_server
-{
-	print("### Restarting test server\n");
-	system_log('pg_ctl', '-D', $test_server_datadir, '-w', '-l',
-	  $test_server_logfile, 'restart');
+	$node->stopNode('immediate');
+	@active_nodes = grep { $_ ne $node } @active_nodes;
 }
 
 END
 {
-	if ($test_server_datadir)
+	foreach my $node (@active_nodes)
 	{
-		system_log('pg_ctl', '-D', $test_server_datadir, '-m',
-		  'immediate', 'stop');
+		teardown_node($node);
 	}
 }
 
@@ -226,6 +124,12 @@ sub psql
 	my ($stdout, $stderr);
 	print("# Running SQL command: $sql\n");
 	run [ 'psql', '-X', '-A', '-t', '-q', '-d', $dbname, '-f', '-' ], '<', \$sql, '>', \$stdout, '2>', \$stderr or die;
+	if ($stderr ne "")
+	{
+		print "#### Begin standard error\n";
+		print $stderr;
+		print "#### End standard error\n";
+	}
 	chomp $stdout;
 	$stdout =~ s/\r//g if $Config{osname} eq 'msys';
 	return $stdout;
@@ -249,32 +153,10 @@ sub slurp_file
 	return $contents;
 }
 
-sub system_or_bail
-{
-	if (system_log(@_) != 0)
-	{
-		BAIL_OUT("system $_[0] failed: $?");
-	}
-}
-
-sub system_log
-{
-	print("# Running: " . join(" ", @_) ."\n");
-	return system(@_);
-}
-
-sub run_log
-{
-	print("# Running: " . join(" ", @{$_[0]}) ."\n");
-	return run (@_);
-}
-
 
 #
 # Test functions
 #
-
-
 sub command_ok
 {
 	my ($cmd, $test_name) = @_;
@@ -354,11 +236,11 @@ sub command_like
 
 sub issues_sql_like
 {
-	my ($cmd, $expected_sql, $test_name) = @_;
-	truncate $test_server_logfile, 0;
+	my ($node, $cmd, $expected_sql, $test_name) = @_;
+	truncate $node->getLogFile(), 0;
 	my $result = run_log($cmd);
 	ok($result, "@$cmd exit code 0");
-	my $log = slurp_file($test_server_logfile);
+	my $log = slurp_file($node->getLogFile());
 	like($log, $expected_sql, "$test_name: SQL found in server log");
 }
 
diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm
index a6c77b5..ccd4754 100644
--- a/src/test/ssl/ServerSetup.pm
+++ b/src/test/ssl/ServerSetup.pm
@@ -18,6 +18,7 @@ package ServerSetup;
 
 use strict;
 use warnings;
+use PostgresNode;
 use TestLib;
 use File::Basename;
 use File::Copy;
@@ -45,7 +46,7 @@ sub copy_files
 
 sub configure_test_server_for_ssl
 {
-	my $tempdir    = $_[0];
+	my $pgdata     = $_[0];
 	my $serverhost = $_[1];
 
 	# Create test users and databases
@@ -55,7 +56,7 @@ sub configure_test_server_for_ssl
 	psql 'postgres', "CREATE DATABASE certdb";
 
 	# enable logging etc.
-	open CONF, ">>$tempdir/pgdata/postgresql.conf";
+	open CONF, ">>$pgdata/postgresql.conf";
 	print CONF "fsync=off\n";
 	print CONF "log_connections=on\n";
 	print CONF "log_hostname=on\n";
@@ -68,17 +69,17 @@ sub configure_test_server_for_ssl
 	close CONF;
 
 # Copy all server certificates and keys, and client root cert, to the data dir
-	copy_files("ssl/server-*.crt", "$tempdir/pgdata");
-	copy_files("ssl/server-*.key", "$tempdir/pgdata");
-	chmod(0600, glob "$tempdir/pgdata/server-*.key") or die $!;
-	copy_files("ssl/root+client_ca.crt", "$tempdir/pgdata");
-	copy_files("ssl/root+client.crl",    "$tempdir/pgdata");
+	copy_files("ssl/server-*.crt", $pgdata);
+	copy_files("ssl/server-*.key", $pgdata);
+	chmod(0600, glob "$pgdata/server-*.key") or die $!;
+	copy_files("ssl/root+client_ca.crt", $pgdata);
+	copy_files("ssl/root+client.crl",    $pgdata);
 
   # Only accept SSL connections from localhost. Our tests don't depend on this
   # but seems best to keep it as narrow as possible for security reasons.
   #
   # When connecting to certdb, also check the client certificate.
-	open HBA, ">$tempdir/pgdata/pg_hba.conf";
+	open HBA, ">$pgdata/pg_hba.conf";
 	print HBA
 "# TYPE  DATABASE        USER            ADDRESS                 METHOD\n";
 	print HBA
@@ -96,12 +97,13 @@ sub configure_test_server_for_ssl
 # the server so that the configuration takes effect.
 sub switch_server_cert
 {
-	my $tempdir  = $_[0];
+	my $node     = $_[0];
 	my $certfile = $_[1];
+	my $pgdata   = $node->getDataDir();
 
 	diag "Restarting server with certfile \"$certfile\"...";
 
-	open SSLCONF, ">$tempdir/pgdata/sslconfig.conf";
+	open SSLCONF, ">$pgdata/sslconfig.conf";
 	print SSLCONF "ssl=on\n";
 	print SSLCONF "ssl_ca_file='root+client_ca.crt'\n";
 	print SSLCONF "ssl_cert_file='$certfile.crt'\n";
@@ -110,5 +112,5 @@ sub switch_server_cert
 	close SSLCONF;
 
 	# Stop and restart server to reload the new config.
-	restart_test_server();
+	$node->restartNode();
 }
diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl
index 0d6f339..6a32af1 100644
--- a/src/test/ssl/t/001_ssltests.pl
+++ b/src/test/ssl/t/001_ssltests.pl
@@ -1,5 +1,7 @@
 use strict;
 use warnings;
+use PostgresNode;
+use TestBase;
 use TestLib;
 use Test::More tests => 38;
 use ServerSetup;
@@ -25,8 +27,6 @@ BEGIN
 # postgresql-ssl-regression.test.
 my $SERVERHOSTADDR = '127.0.0.1';
 
-my $tempdir = TestLib::tempdir;
-
 # Define a couple of helper functions to test connecting to the server.
 
 my $common_connstr;
@@ -74,10 +74,16 @@ chmod 0600, "ssl/client.key";
 
 #### Part 0. Set up the server.
 
-diag "setting up data directory in \"$tempdir\"...";
-start_test_server($tempdir);
-configure_test_server_for_ssl($tempdir, $SERVERHOSTADDR);
-switch_server_cert($tempdir, 'server-cn-only');
+diag "setting up data directory...";
+my $node = get_new_node();
+$node->initNode();
+# PGHOST is enforced here to set up the node, subsequent connections
+# will use a dedicated connection string.
+$ENV{PGHOST} = $node->getHost();
+$ENV{PGPORT} = $node->getPort();
+$node->startNode();
+configure_test_server_for_ssl($node->getDataDir(), $SERVERHOSTADDR);
+switch_server_cert($node, 'server-cn-only');
 
 ### Part 1. Run client-side tests.
 ###
@@ -150,7 +156,7 @@ test_connect_ok("sslmode=verify-ca host=wronghost.test");
 test_connect_fails("sslmode=verify-full host=wronghost.test");
 
 # Test Subject Alternative Names.
-switch_server_cert($tempdir, 'server-multiple-alt-names');
+switch_server_cert($node, 'server-multiple-alt-names');
 
 diag "test hostname matching with X509 Subject Alternative Names";
 $common_connstr =
@@ -165,7 +171,7 @@ test_connect_fails("host=deep.subdomain.wildcard.pg-ssltest.test");
 
 # Test certificate with a single Subject Alternative Name. (this gives a
 # slightly different error message, that's all)
-switch_server_cert($tempdir, 'server-single-alt-name');
+switch_server_cert($node, 'server-single-alt-name');
 
 diag "test hostname matching with a single X509 Subject Alternative Name";
 $common_connstr =
@@ -178,7 +184,7 @@ test_connect_fails("host=deep.subdomain.wildcard.pg-ssltest.test");
 
 # Test server certificate with a CN and SANs. Per RFCs 2818 and 6125, the CN
 # should be ignored when the certificate has both.
-switch_server_cert($tempdir, 'server-cn-and-alt-names');
+switch_server_cert($node, 'server-cn-and-alt-names');
 
 diag "test certificate with both a CN and SANs";
 $common_connstr =
@@ -190,7 +196,7 @@ test_connect_fails("host=common-name.pg-ssltest.test");
 
 # Finally, test a server certificate that has no CN or SANs. Of course, that's
 # not a very sensible certificate, but libpq should handle it gracefully.
-switch_server_cert($tempdir, 'server-no-names');
+switch_server_cert($node, 'server-no-names');
 $common_connstr =
 "user=ssltestuser dbname=trustdb sslcert=invalid sslrootcert=ssl/root+server_ca.crt hostaddr=$SERVERHOSTADDR";
 
@@ -199,7 +205,7 @@ test_connect_fails("sslmode=verify-full host=common-name.pg-ssltest.test");
 
 # Test that the CRL works
 diag "Testing client-side CRL";
-switch_server_cert($tempdir, 'server-revoked');
+switch_server_cert($node, 'server-revoked');
 
 $common_connstr =
 "user=ssltestuser dbname=trustdb sslcert=invalid hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test";
@@ -233,7 +239,3 @@ test_connect_fails(
 test_connect_fails(
 "user=ssltestuser sslcert=ssl/client-revoked.crt sslkey=ssl/client-revoked.key"
 );
-
-
-# All done! Save the log, before the temporary installation is deleted
-copy("$tempdir/client-log", "./client-log");
-- 
2.6.3
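
To give an idea of the shape the refactored tests take once the first patch
is in place, a client-side TAP test boils down to something like the sketch
below. This is illustrative only: the createdb invocation and the log
pattern are made up for the example, while the node methods and helper
functions are the ones introduced or adjusted by the patch (and
issues_sql_like assumes statement logging is enabled on the node).

    use strict;
    use warnings;
    use PostgresNode;
    use TestBase;
    use TestLib;
    use Test::More tests => 3;

    # Grab a node with its own port, then initialize and start it.
    my $node = get_new_node();
    $node->initNode();
    $node->startNode();

    # Client programs invoked without an explicit connection string are
    # pointed at this node through the environment.
    $ENV{PGHOST} = $node->getHost();
    $ENV{PGPORT} = $node->getPort();

    # Plain queries go through the node's connection string.
    my $result = psql $node->getConnStr(), 'SELECT 1';
    is($result, '1', 'simple query');

    # issues_sql_like() now takes the node first so that it can look at
    # the node's own log file.
    issues_sql_like($node, [ 'createdb', 'foodb' ],
        qr/statement: CREATE DATABASE foodb/,
        'create database');

    # Nodes registered through get_new_node() are torn down automatically
    # at process exit, but this can also be done explicitly.
    teardown_node($node);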

From 9a8495d413a3fff8edc76ff84ee905a49262a1b5 Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@otacoo.com>
Date: Wed, 25 Nov 2015 22:06:17 +0900
Subject: [PATCH 2/2] Add recovery test suite

Using the infrastructure put in place by the previous commit, this commit
adds a new set of tests dedicated to nodes in recovery: standbys, backups,
and more advanced cluster operations. A couple of tests showing how to use
those routines are given as well.
---
 src/bin/pg_rewind/RewindTest.pm             |  32 -----
 src/test/Makefile                           |   2 +-
 src/test/perl/TestLib.pm                    |  32 +++++
 src/test/recovery/.gitignore                |   3 +
 src/test/recovery/Makefile                  |  17 +++
 src/test/recovery/README                    |  19 +++
 src/test/recovery/RecoveryTest.pm           | 179 ++++++++++++++++++++++++++++
 src/test/recovery/t/001_stream_rep.pl       |  67 +++++++++++
 src/test/recovery/t/002_archiving.pl        |  51 ++++++++
 src/test/recovery/t/003_recovery_targets.pl | 135 +++++++++++++++++++++
 src/test/recovery/t/004_timeline_switch.pl  |  76 ++++++++++++
 src/test/recovery/t/005_replay_delay.pl     |  49 ++++++++
 12 files changed, 629 insertions(+), 33 deletions(-)
 create mode 100644 src/test/recovery/.gitignore
 create mode 100644 src/test/recovery/Makefile
 create mode 100644 src/test/recovery/README
 create mode 100644 src/test/recovery/RecoveryTest.pm
 create mode 100644 src/test/recovery/t/001_stream_rep.pl
 create mode 100644 src/test/recovery/t/002_archiving.pl
 create mode 100644 src/test/recovery/t/003_recovery_targets.pl
 create mode 100644 src/test/recovery/t/004_timeline_switch.pl
 create mode 100644 src/test/recovery/t/005_replay_delay.pl

diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm
index 5124df8..660b03f 100644
--- a/src/bin/pg_rewind/RewindTest.pm
+++ b/src/bin/pg_rewind/RewindTest.pm
@@ -119,38 +119,6 @@ sub check_query
 	}
 }
 
-# Run a query once a second, until it returns 't' (i.e. SQL boolean true).
-sub poll_query_until
-{
-	my ($query, $connstr) = @_;
-
-	my $max_attempts = 30;
-	my $attempts     = 0;
-	my ($stdout, $stderr);
-
-	while ($attempts < $max_attempts)
-	{
-		my $cmd = [ 'psql', '-At', '-c', "$query", '-d', "$connstr" ];
-		my $result = run $cmd, '>', \$stdout, '2>', \$stderr;
-
-		chomp($stdout);
-		$stdout =~ s/\r//g if $Config{osname} eq 'msys';
-		if ($stdout eq "t")
-		{
-			return 1;
-		}
-
-		# Wait a second before retrying.
-		sleep 1;
-		$attempts++;
-	}
-
-	# The query result didn't change in 30 seconds. Give up. Print the stderr
-	# from the last attempt, hopefully that's useful for debugging.
-	diag $stderr;
-	return 0;
-}
-
 sub append_to_file
 {
 	my ($filename, $str) = @_;
diff --git a/src/test/Makefile b/src/test/Makefile
index b713c2c..7f7754f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -12,7 +12,7 @@ subdir = src/test
 top_builddir = ../..
 include $(top_builddir)/src/Makefile.global
 
-SUBDIRS = regress isolation modules
+SUBDIRS = regress isolation modules recovery
 
 # We don't build or execute examples/, locale/, or thread/ by default,
 # but we do want "make clean" etc to recurse into them.  Likewise for ssl/,
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index a1473c6..091043e 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -8,6 +8,7 @@ use Exporter 'import';
 our @EXPORT = qw(
   get_new_node
   teardown_node
+  poll_query_until
   psql
   slurp_dir
   slurp_file
@@ -110,6 +111,37 @@ sub teardown_node
 	@active_nodes = grep { $_ ne $node } @active_nodes;
 }
 
+sub poll_query_until
+{
+	my ($query, $connstr) = @_;
+
+	my $max_attempts = 30;
+	my $attempts     = 0;
+	my ($stdout, $stderr);
+
+	while ($attempts < $max_attempts)
+	{
+		my $cmd = [ 'psql', '-At', '-c', "$query", '-d', "$connstr" ];
+		my $result = run $cmd, '>', \$stdout, '2>', \$stderr;
+
+		chomp($stdout);
+		$stdout =~ s/\r//g if $Config{osname} eq 'msys';
+		if ($stdout eq "t")
+		{
+			return 1;
+		}
+
+		# Wait a second before retrying.
+		sleep 1;
+		$attempts++;
+	}
+
+	# The query result didn't change in 30 seconds. Give up. Print the stderr
+	# from the last attempt, hopefully that's useful for debugging.
+	diag $stderr;
+	return 0;
+}
+
 END
 {
 	foreach my $node (@active_nodes)
diff --git a/src/test/recovery/.gitignore b/src/test/recovery/.gitignore
new file mode 100644
index 0000000..499fa7d
--- /dev/null
+++ b/src/test/recovery/.gitignore
@@ -0,0 +1,3 @@
+# Generated by test suite
+/regress_log/
+/tmp_check/
diff --git a/src/test/recovery/Makefile b/src/test/recovery/Makefile
new file mode 100644
index 0000000..16c063a
--- /dev/null
+++ b/src/test/recovery/Makefile
@@ -0,0 +1,17 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/recovery
+#
+# Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/recovery/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/recovery
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+	$(prove_check)
diff --git a/src/test/recovery/README b/src/test/recovery/README
new file mode 100644
index 0000000..20b98e0
--- /dev/null
+++ b/src/test/recovery/README
@@ -0,0 +1,19 @@
+src/test/recovery/README
+
+Regression tests for recovery and replication
+=============================================
+
+This directory contains a test suite for recovery and replication,
+mainly testing the interactions of recovery.conf with cluster
+instances. It provides a simple set of routines that can be used
+to define a custom cluster for a test, including backup, archiving,
+and streaming configuration.
+
+Running the tests
+=================
+
+    make check
+
+NOTE: This creates a temporary installation, and some tests may
+create one or more nodes, either masters or standbys, for the
+purpose of the tests.
diff --git a/src/test/recovery/RecoveryTest.pm b/src/test/recovery/RecoveryTest.pm
new file mode 100644
index 0000000..2b03233
--- /dev/null
+++ b/src/test/recovery/RecoveryTest.pm
@@ -0,0 +1,179 @@
+# Set of common routines for recovery regression tests for a PostgreSQL
+# cluster. This includes methods that the various tests can use to set
+# up cluster nodes and configure them according to the desired test
+# scenario.
+#
+# This module makes use of PostgresNode for node manipulation, performing
+# higher-level operations to create standby nodes or set them up
+# for archiving and replication.
+#
+# Nodes are identified by their port number, which is allocated when a
+# node is created and is hence unique for each node of the locally-run
+# cluster. PGHOST is likewise set to a unique value for the duration of
+# each test.
+
+package RecoveryTest;
+
+use strict;
+use warnings;
+
+use Cwd;
+use PostgresNode;
+use RecursiveCopy;
+use TestBase;
+use TestLib;
+use Test::More;
+
+use IPC::Run qw(run start);
+
+use Exporter 'import';
+
+our @EXPORT = qw(
+	enable_archiving
+	enable_restoring
+	enable_streaming
+	make_master
+	make_archive_standby
+	make_stream_standby
+);
+
+# Set of handy routines to set up a node with different characteristics
+# Enable streaming replication
+sub enable_streaming
+{
+	my $node_root = shift; # Instance to link to
+	my $node_standby = shift;
+	my $root_connstr = $node_root->getConnStr();
+	my $applname = $node_standby->getApplName();
+
+	$node_standby->appendConf('recovery.conf', qq(
+primary_conninfo='$root_connstr application_name=$applname'
+standby_mode=on
+recovery_target_timeline='latest'
+));
+}
+
+# Enable the use of restore_command from a node
+sub enable_restoring
+{
+	my $node_root = shift; # Instance to link to
+	my $node_standby = shift;
+	my $path = $node_root->getArchiveDir();
+
+	# Switch path to use slashes on Windows
+	$path =~ tr#\\#/# if ($windows_os);
+	my $copy_command = $windows_os ?
+		"copy \"$path\\\\%f\" \"%p\"" :
+		"cp -i $path/%f %p";
+	$node_standby->appendConf('recovery.conf', qq(
+restore_command='$copy_command'
+standby_mode=on
+));
+}
+
+# Enable WAL archiving on a node
+sub enable_archiving
+{
+	my $node = shift;
+	my $path = $node->getArchiveDir();
+
+	# Switch path to use slashes on Windows
+	$path =~ tr#\\#/# if ($windows_os);
+	my $copy_command = $windows_os ?
+		"copy \"%p\" \"$path\\\\%f\"" :
+		"cp %p $path/%f";
+
+	# Enable archive_mode and archive_command on node
+	$node->appendConf('postgresql.conf', qq(
+archive_mode = on
+archive_command = '$copy_command'
+));
+}
+
+# Master node initialization.
+sub make_master
+{
+	my $node_master = get_new_node();
+	my $port_master = $node_master->getPort();
+	print "# Initializing master node with port $port_master\n";
+	$node_master->initNode();
+	configure_base_node($node_master);
+	return $node_master;
+}
+
+sub configure_base_node
+{
+	my $node = shift;
+
+	$node->appendConf('postgresql.conf', qq(
+wal_level = hot_standby
+max_wal_senders = 5
+wal_keep_segments = 20
+max_wal_size = 128MB
+shared_buffers = 1MB
+wal_log_hints = on
+hot_standby = on
+autovacuum = off
+));
+}
+
+# Standby node initializations
+# Node getting WAL only via streaming.
+sub make_stream_standby
+{
+	my $node_master = shift;
+	my $backup_name = shift;
+	my $node_standby = get_new_node();
+	my $master_port = $node_master->getPort();
+	my $standby_port = $node_standby->getPort();
+
+	print "# Initializing streaming mode for node $standby_port from node $master_port\n";
+	$node_standby->initNodeFromBackup($node_master, $backup_name);
+	configure_base_node($node_standby);
+
+	# Start second node, streaming from first one
+	enable_streaming($node_master, $node_standby);
+	return $node_standby;
+}
+
+# Node getting WAL only from archives
+sub make_archive_standby
+{
+	my $node_master = shift;
+	my $backup_name = shift;
+	my $node_standby = get_new_node();
+	my $master_port = $node_master->getPort();
+	my $standby_port = $node_standby->getPort();
+
+	print "# Initializing archive mode for node $standby_port from node $master_port\n";
+	$node_standby->initNodeFromBackup($node_master, $backup_name);
+	configure_base_node($node_standby);
+
+	# Start second node, restoring from first one
+	enable_restoring($node_master, $node_standby);
+	return $node_standby;
+}
+
+# Wait until a node is able to accept queries. Useful when putting a node
+# into recovery and waiting for it to become usable, particularly on slow
+# machines.
+sub wait_for_node
+{
+	my $node         = shift;
+	my $max_attempts = 30;
+	my $attempts     = 0;
+	while ($attempts < $max_attempts)
+	{
+		if (run_log(['pg_isready', '-p', $node->getPort()]))
+		{
+			return 1;
+		}
+
+		# Wait a second before retrying.
+		sleep 1;
+		$attempts++;
+	}
+	return 0;
+}
+
+1;
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
new file mode 100644
index 0000000..aae1026
--- /dev/null
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -0,0 +1,67 @@
+# Minimal test of streaming replication
+use strict;
+use warnings;
+use TestLib;
+use Test::More tests => 4;
+
+use RecoveryTest;
+
+# Initialize master node
+my $node_master = make_master();
+$node_master->startNode();
+my $backup_name = 'my_backup';
+
+# Take backup
+$node_master->backupNode($backup_name);
+
+# Create streaming standby linking to master
+my $node_standby_1 = make_stream_standby($node_master, $backup_name);
+$node_standby_1->startNode();
+
+# Take backup of standby 1 (not mandatory, but useful to check if
+# pg_basebackup works on a standby).
+$node_standby_1->backupNode($backup_name);
+
+# Create second standby node linking to standby 1
+my $node_standby_2 = make_stream_standby($node_standby_1, $backup_name);
+$node_standby_2->startNode();
+$node_standby_2->backupNode($backup_name);
+
+# Create some content on master and check its presence in standby 1 and 2
+psql $node_master->getConnStr(),
+	"CREATE TABLE tab_int AS SELECT generate_series(1,1002) AS a";
+
+# Wait for standbys to catch up
+my $applname_1 = $node_standby_1->getApplName();
+my $applname_2 = $node_standby_2->getApplName();
+my $caughtup_query = "SELECT pg_current_xlog_location() = write_location FROM pg_stat_replication WHERE application_name = '$applname_1';";
+poll_query_until($caughtup_query, $node_master->getConnStr())
+	or die "Timed out while waiting for standby 1 to catch up";
+$caughtup_query = "SELECT pg_last_xlog_replay_location() = write_location FROM pg_stat_replication WHERE application_name = '$applname_2';";
+poll_query_until($caughtup_query, $node_standby_1->getConnStr())
+	or die "Timed out while waiting for standby 2 to catch up";
+
+my $result = psql $node_standby_1->getConnStr(),
+	"SELECT count(*) FROM tab_int";
+print "standby 1: $result\n";
+is($result, qq(1002), 'check streamed content on standby 1');
+
+$result = psql $node_standby_2->getConnStr(),
+	"SELECT count(*) FROM tab_int";
+print "standby 2: $result\n";
+is($result, qq(1002), 'check streamed content on standby 2');
+
+# Check that only read-only queries can run on standbys
+command_fails(['psql', '-A', '-t', '--no-psqlrc',
+	'-d', $node_standby_1->getConnStr(), '-c',
+	"INSERT INTO tab_int VALUES (1)"],
+	'Read-only queries on standby 1');
+command_fails(['psql', '-A', '-t', '--no-psqlrc',
+	'-d', $node_standby_2->getConnStr(), '-c',
+	"INSERT INTO tab_int VALUES (1)"],
+	'Read-only queries on standby 2');
+
+# Cleanup nodes
+teardown_node($node_standby_2);
+teardown_node($node_standby_1);
+teardown_node($node_master);
diff --git a/src/test/recovery/t/002_archiving.pl b/src/test/recovery/t/002_archiving.pl
new file mode 100644
index 0000000..c3e8465
--- /dev/null
+++ b/src/test/recovery/t/002_archiving.pl
@@ -0,0 +1,51 @@
+# test for archiving with warm standby
+use strict;
+use warnings;
+use TestLib;
+use Test::More tests => 1;
+use File::Copy;
+use RecoveryTest;
+
+# Initialize master node, doing archives
+my $node_master = make_master();
+my $backup_name = 'my_backup';
+enable_archiving($node_master);
+
+# Start it
+$node_master->startNode();
+
+# Take backup for slave
+$node_master->backupNode($backup_name);
+
+# Initialize standby node from backup, fetching WAL from archives
+my $node_standby = make_archive_standby($node_master, $backup_name);
+$node_standby->appendConf('postgresql.conf', qq(
+wal_retrieve_retry_interval = '100ms'
+));
+$node_standby->startNode();
+
+# Create some content on master
+psql $node_master->getConnStr(),
+	"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a";
+my $current_lsn = psql $node_master->getConnStr(),
+	"SELECT pg_current_xlog_location();";
+
+# Force archiving of the current WAL file so it reaches the archives
+psql $node_master->getConnStr(), "SELECT pg_switch_xlog()";
+
+# Add some more content, it should not be present on standby
+psql $node_master->getConnStr(),
+	"INSERT INTO tab_int VALUES (generate_series(1001,2000))";
+
+# Wait until necessary replay has been done on standby
+my $caughtup_query = "SELECT '$current_lsn'::pg_lsn <= pg_last_xlog_replay_location()";
+poll_query_until($caughtup_query, $node_standby->getConnStr())
+	or die "Timed out while waiting for standby to catch up";
+
+my $result = psql $node_standby->getConnStr(),
+	"SELECT count(*) FROM tab_int";
+is($result, qq(1000), 'check content from archives');
+
+# Cleanup nodes
+teardown_node($node_standby);
+teardown_node($node_master);
diff --git a/src/test/recovery/t/003_recovery_targets.pl b/src/test/recovery/t/003_recovery_targets.pl
new file mode 100644
index 0000000..995e8e4
--- /dev/null
+++ b/src/test/recovery/t/003_recovery_targets.pl
@@ -0,0 +1,135 @@
+# Test for recovery targets: name, timestamp, XID
+use strict;
+use warnings;
+use TestLib;
+use Test::More tests => 7;
+
+use RecoveryTest;
+
+# Create and test a standby from given backup, with a certain
+# recovery target.
+sub test_recovery_standby
+{
+	my $test_name = shift;
+	my $node_master = shift;
+	my $recovery_params = shift;
+	my $num_rows = shift;
+	my $until_lsn = shift;
+
+	my $node_standby = make_archive_standby($node_master, 'my_backup');
+
+	foreach my $param_item (@$recovery_params)
+	{
+		$node_standby->appendConf('recovery.conf',
+					   qq($param_item
+));
+	}
+
+	$node_standby->startNode();
+
+	# Wait until standby has replayed enough data
+	my $caughtup_query = "SELECT '$until_lsn'::pg_lsn <= pg_last_xlog_replay_location()";
+	poll_query_until($caughtup_query, $node_standby->getConnStr())
+		or die "Timed out while waiting for standby to catch up";
+
+	# Check the presence of the expected content on the standby
+	my $result = psql $node_standby->getConnStr(),
+		"SELECT count(*) FROM tab_int";
+	is($result, qq($num_rows), "check standby content for $test_name");
+
+	# Stop standby node
+	teardown_node($node_standby);
+}
+
+# Initialize master node
+my $node_master = make_master();
+enable_archiving($node_master);
+
+# Start it
+$node_master->startNode();
+
+# Create data before taking the backup, aimed at testing
+# recovery_target = 'immediate'
+psql $node_master->getConnStr(),
+	"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a";
+my $lsn1 = psql $node_master->getConnStr(),
+	"SELECT pg_current_xlog_location();";
+
+# Take backup from which all operations will be run
+$node_master->backupNode('my_backup');
+
+# Insert some data to be used as a replay reference, with a recovery
+# target TXID.
+psql $node_master->getConnStr(),
+	"INSERT INTO tab_int VALUES (generate_series(1001,2000))";
+my $recovery_txid = psql $node_master->getConnStr(),
+	"SELECT txid_current()";
+my $lsn2 = psql $node_master->getConnStr(),
+	"SELECT pg_current_xlog_location();";
+
+# More data, with recovery target timestamp
+psql $node_master->getConnStr(),
+	"INSERT INTO tab_int VALUES (generate_series(2001,3000))";
+my $recovery_time = psql $node_master->getConnStr(), "SELECT now()";
+my $lsn3 = psql $node_master->getConnStr(),
+	"SELECT pg_current_xlog_location();";
+
+# Even more data, this time with a recovery target name
+psql $node_master->getConnStr(),
+	"INSERT INTO tab_int VALUES (generate_series(3001,4000))";
+my $recovery_name = "my_target";
+my $lsn4 = psql $node_master->getConnStr(),
+	"SELECT pg_current_xlog_location();";
+psql $node_master->getConnStr(),
+	"SELECT pg_create_restore_point('$recovery_name')";
+
+# Force archiving of WAL file
+psql $node_master->getConnStr(), "SELECT pg_switch_xlog()";
+
+# Test recovery targets
+my @recovery_params = ( "recovery_target = 'immediate'" );
+test_recovery_standby('immediate target', $node_master,
+					  \@recovery_params,
+					  "1000", $lsn1);
+@recovery_params = ( "recovery_target_xid = '$recovery_txid'" );
+test_recovery_standby('XID', $node_master,
+					  \@recovery_params,
+					  "2000", $lsn2);
+@recovery_params = ( "recovery_target_time = '$recovery_time'" );
+test_recovery_standby('Time', $node_master,
+					  \@recovery_params,
+					  "3000", $lsn3);
+@recovery_params = ( "recovery_target_name = '$recovery_name'" );
+test_recovery_standby('Name', $node_master,
+					  \@recovery_params,
+					  "4000", $lsn4);
+
+# Multiple targets
+# The last entry has priority (note that an array preserves the order of
+# its items, unlike a hash).
+@recovery_params = (
+	"recovery_target_name = '$recovery_name'",
+	"recovery_target_xid  = '$recovery_txid'",
+	"recovery_target_time = '$recovery_time'"
+);
+test_recovery_standby('Name + XID + Time', $node_master,
+					  \@recovery_params,
+					  "3000", $lsn3);
+@recovery_params = (
+	"recovery_target_time = '$recovery_time'",
+	"recovery_target_name = '$recovery_name'",
+	"recovery_target_xid  = '$recovery_txid'"
+);
+test_recovery_standby('Time + Name + XID', $node_master,
+					  \@recovery_params,
+					  "2000", $lsn2);
+@recovery_params = (
+	"recovery_target_xid  = '$recovery_txid'",
+	"recovery_target_time = '$recovery_time'",
+	"recovery_target_name = '$recovery_name'"
+);
+test_recovery_standby('XID + Time + Name', $node_master,
+					  \@recovery_params,
+					  "4000", $lsn4);
+
+teardown_node($node_master);
diff --git a/src/test/recovery/t/004_timeline_switch.pl b/src/test/recovery/t/004_timeline_switch.pl
new file mode 100644
index 0000000..c78c92a
--- /dev/null
+++ b/src/test/recovery/t/004_timeline_switch.pl
@@ -0,0 +1,76 @@
+# Test for timeline switch
+# Ensure that a standby is able to follow a newly-promoted standby
+# on a new timeline.
+use strict;
+use warnings;
+use File::Path qw(remove_tree);
+use PostgresNode;
+use TestBase;
+use TestLib;
+use Test::More tests => 1;
+
+use RecoveryTest;
+
+$ENV{PGDATABASE} = 'postgres';
+
+# Initialize master node
+my $node_master = make_master();
+$node_master->startNode();
+
+# Take backup
+my $backup_name = 'my_backup';
+$node_master->backupNode($backup_name);
+
+# Create two standbys linking to it
+my $node_standby_1 = make_stream_standby($node_master, $backup_name);
+$node_standby_1->startNode();
+my $node_standby_2 = make_stream_standby($node_master, $backup_name);
+$node_standby_2->startNode();
+
+# Create some content on master
+psql $node_master->getConnStr(),
+	"CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a";
+my $until_lsn = psql $node_master->getConnStr(),
+	"SELECT pg_current_xlog_location();";
+
+# Wait until standby 1 has replayed enough data
+my $caughtup_query = "SELECT '$until_lsn'::pg_lsn <= pg_last_xlog_replay_location()";
+poll_query_until($caughtup_query, $node_standby_1->getConnStr())
+	or die "Timed out while waiting for standby to catch up";
+
+# Stop and remove master, and promote standby 1, switching it to a new timeline
+teardown_node($node_master);
+system_or_bail('pg_ctl', '-w', '-D', $node_standby_1->getDataDir(),
+			   'promote');
+print "# Promoted standby 1\n";
+
+# Switch standby 2 to replay from standby 1
+remove_tree($node_standby_2->getDataDir() . '/recovery.conf');
+my $connstr_1 = $node_standby_1->getConnStr();
+$node_standby_2->appendConf('recovery.conf', qq(
+primary_conninfo='$connstr_1'
+standby_mode=on
+recovery_target_timeline='latest'
+));
+$node_standby_2->restartNode();
+
+# Insert some data on standby 1 and check its presence on standby 2
+# to ensure that the timeline switch happened. Standby 1 needs to
+# exit recovery before moving on with the test.
+poll_query_until("SELECT pg_is_in_recovery() <> true",
+				 $node_standby_1->getConnStr());
+psql $node_standby_1->getConnStr(),
+	"INSERT INTO tab_int VALUES (generate_series(1001,2000))";
+$until_lsn = psql $node_standby_1->getConnStr(),
+	"SELECT pg_current_xlog_location();";
+$caughtup_query = "SELECT '$until_lsn'::pg_lsn <= pg_last_xlog_replay_location()";
+poll_query_until($caughtup_query, $node_standby_2->getConnStr())
+	or die "Timed out while waiting for standby to catch up";
+
+my $result = psql $node_standby_2->getConnStr(),
+	"SELECT count(*) FROM tab_int";
+is($result, qq(2000), 'check content of standby 2');
+
+# Stop nodes
+teardown_node($node_standby_2);
+teardown_node($node_standby_1);
diff --git a/src/test/recovery/t/005_replay_delay.pl b/src/test/recovery/t/005_replay_delay.pl
new file mode 100644
index 0000000..c209b55
--- /dev/null
+++ b/src/test/recovery/t/005_replay_delay.pl
@@ -0,0 +1,49 @@
+# Checks for recovery_min_apply_delay
+use strict;
+use warnings;
+use TestLib;
+use Test::More tests => 2;
+
+use RecoveryTest;
+
+# Initialize master node
+my $node_master = make_master();
+$node_master->startNode();
+
+# And some content
+psql $node_master->getConnStr(),
+	"CREATE TABLE tab_int AS SELECT generate_series(1,10) AS a";
+
+# Take backup
+my $backup_name = 'my_backup';
+$node_master->backupNode($backup_name);
+
+# Create streaming standby from backup
+my $node_standby = make_stream_standby($node_master, $backup_name);
+$node_standby->appendConf('recovery.conf', qq(
+recovery_min_apply_delay = '2s'
+));
+$node_standby->startNode();
+
+# Make new content on master and check its presence in standby
+# depending on the delay of 2s applied above.
+psql $node_master->getConnStr(),
+	"INSERT INTO tab_int VALUES (generate_series(11,20))";
+sleep 1;
+# Here we should have only 10 rows
+my $result = psql $node_standby->getConnStr(),
+	"SELECT count(*) FROM tab_int";
+is($result, qq(10), 'check content with 2s delay, 1s after insert');
+
+# Now wait for replay to complete on standby
+my $until_lsn = psql $node_master->getConnStr(),
+	"SELECT pg_current_xlog_location();";
+my $caughtup_query = "SELECT '$until_lsn'::pg_lsn <= pg_last_xlog_replay_location()";
+poll_query_until($caughtup_query, $node_standby->getConnStr())
+	or die "Timed out while waiting for standby to catch up";
+$result = psql $node_standby->getConnStr(), "SELECT count(*) FROM tab_int";
+is($result, qq(20), 'check content with delay of 2s');
+
+# Stop nodes
+teardown_node($node_standby);
+teardown_node($node_master);
-- 
2.6.3
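
For reference, a new scenario added on top of the second patch under
src/test/recovery/t/ follows roughly the skeleton below (a sketch only;
the table name and row count are arbitrary, while the routines are the
ones exported by RecoveryTest and TestLib in these patches).

    use strict;
    use warnings;
    use TestLib;
    use Test::More tests => 1;

    use RecoveryTest;

    # Set up a master, back it up, then spawn a streaming standby from
    # that backup.
    my $node_master = make_master();
    $node_master->startNode();
    $node_master->backupNode('my_backup');
    my $node_standby = make_stream_standby($node_master, 'my_backup');
    $node_standby->startNode();

    # Generate some activity on the master and note the current LSN.
    psql $node_master->getConnStr(),
        'CREATE TABLE tab_check AS SELECT generate_series(1,100) AS a';
    my $until_lsn = psql $node_master->getConnStr(),
        'SELECT pg_current_xlog_location();';

    # Wait for the standby to replay up to that point, then compare.
    my $caughtup_query =
        "SELECT '$until_lsn'::pg_lsn <= pg_last_xlog_replay_location()";
    poll_query_until($caughtup_query, $node_standby->getConnStr())
        or die 'Timed out while waiting for standby to catch up';
    my $result = psql $node_standby->getConnStr(),
        'SELECT count(*) FROM tab_check';
    is($result, '100', 'standby caught up with master');

    # Clean up.
    teardown_node($node_standby);
    teardown_node($node_master);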
