Hi,

As some of you might have seen when running CI, cirrus-ci is restricting how
many CI cycles everyone can use for free (announcement at [1]). This takes
effect September 1st.

This obviously has consequences both for individual users of CI and for
cfbot.


The first thing I think we should do is to lower the cost of CI. One thing I
had not entirely realized previously is that macos CI is by far the most
expensive CI to provide. That's not just the case with cirrus-ci, but also
with other providers.  See the series of patches described later in this email.


To me, the situation for cfbot is different than the one for individual
users.

IMO, for the individual user case it's important to use CI for "free", without
a whole lot of complexity. That imo rules out approaches like providing
$cloud_provider compute accounts; that's too much setup work.  With the
improvements detailed below, cirrus' free CI would last about 65 runs /
month.

For cfbot I hope we can find funding to pay for compute to use for CI. By far
the most expensive bit is macos, to a significant degree due to macos
licensing terms not allowing more than 2 VMs on a physical host :(.


The reasons we chose cirrus-ci were

a) Ability to use full VMs, rather than a pre-selected set of VMs, which
   allows us to test a larger number of platforms

b) Ability to link to log files without requiring an account. E.g. github
   actions doesn't allow viewing logs unless logged in.

c) Amount of compute available.


The set of free CI providers has shrunk since we chose cirrus, as have the
"free" resources provided. I started a wiki page, quite incomplete as of now,
at [4].


Potential paths forward for individual CI:

- migrate wholesale to another CI provider

- split CI tasks across different CI providers, rely on github et al
  displaying the CI status for different platforms

- give up


Potential paths forward for cfbot, in addition to the above:

- Pay for compute / ask the various cloud providers to grant us compute
  credits. At least some of the cloud providers can be used via cirrus-ci.

- Host (some) CI runners ourselves. Particularly with macos and windows, that
  could provide significant savings.

- Build our own system, using buildbot, jenkins or whatnot.


Opinions as to what to do?



The attached series of patches:

1) Makes startup of macos instances faster, using more efficient caching of
   the required packages. Also submitted as [2].

2) Introduces a template initdb that's reused during the tests. Also submitted
   as [3].

3) Remove use of -DRANDOMIZE_ALLOCATED_MEMORY from macos tasks. It's
   expensive. And CI also uses asan on linux, so I don't think it's really
   needed.

4) Switch tasks to use debugoptimized builds. Previously many tasks used -Og,
   to get decent backtraces etc. But the amount of CPU burned that way is too
   large. One issue with the change is that use of ccache becomes much more
   crucial, as uncached build times increase significantly.

5) Move use of -Dsegsize_blocks=6 from macos to linux

   Macos is expensive and -Dsegsize_blocks=6 slows things down. Alternatively
   we could stop covering segsize_blocks under both meson and autoconf. It
   does affect runtime on linux as well.

6) Disable write cache flushes on windows

   It's a bit ugly to do this without using the UI... Shaves off about 30s
   from the tests.

7) pg_regress only checked once a second whether postgres had started up, but
   startup is usually much faster. Use pg_ctl's logic.  It might be worth
   replacing the use of psql with directly using libpq in pg_regress instead;
   it looks like the overhead of repeatedly starting psql is noticeable.


FWIW: with the patches applied, the "credit costs" in cirrus CI are roughly
like the following (depends on caching etc):

task costs in credits
        linux-sanity: 0.01
        linux-compiler-warnings: 0.05
        linux-meson: 0.07
        freebsd   : 0.08
        linux-autoconf: 0.09
        windows   : 0.18
        macos     : 0.28
total task runtime is 40.8
cost in credits is 0.76, monthly credits of 50 allow approx 66.10 runs/month
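
The runs/month estimate follows directly from the table: sum the per-task
credit costs to get the cost of one full CI run, then divide the monthly free
credits by it. A small sketch (the per-task costs listed above are rounded,
so this yields roughly 65.8 rather than exactly 66.10):

```python
# Per-task credit costs as listed above (rounded values).
task_costs = {
    "linux-sanity": 0.01,
    "linux-compiler-warnings": 0.05,
    "linux-meson": 0.07,
    "freebsd": 0.08,
    "linux-autoconf": 0.09,
    "windows": 0.18,
    "macos": 0.28,
}

# One full CI run executes every task once.
cost_per_run = sum(task_costs.values())          # ~0.76 credits

# cirrus' free monthly allowance, per the announcement.
monthly_credits = 50
runs_per_month = monthly_credits / cost_per_run  # ~65.8 runs
```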


Greetings,

Andres Freund

[1] https://cirrus-ci.org/blog/2023/07/17/limiting-free-usage-of-cirrus-ci/
[2] https://www.postgresql.org/message-id/20230805202539.r3umyamsnctysdc7%40awork3.anarazel.de
[3] https://postgr.es/m/20220120021859.3zpsfqn4z7ob7...@alap3.anarazel.de
From 00c48a90f3acc6cba6af873b29429ee6d4ba38a6 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Thu, 3 Aug 2023 23:29:13 -0700
Subject: [PATCH v1 1/9] ci: macos: use cached macports install

This substantially speeds up the mac CI time.

Discussion: https://postgr.es/m/20230805202539.r3umyamsnctys...@awork3.anarazel.de
---
 .cirrus.yml                          | 63 +++++++-----------
 src/tools/ci/ci_macports_packages.sh | 97 ++++++++++++++++++++++++++++
 2 files changed, 122 insertions(+), 38 deletions(-)
 create mode 100755 src/tools/ci/ci_macports_packages.sh

diff --git a/.cirrus.yml b/.cirrus.yml
index d260f15c4e2..e9cfc542cfe 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -429,8 +429,7 @@ task:
 
     CIRRUS_WORKING_DIR: ${HOME}/pgsql/
     CCACHE_DIR: ${HOME}/ccache
-    HOMEBREW_CACHE: ${HOME}/homebrew-cache
-    PERL5LIB: ${HOME}/perl5/lib/perl5
+    MACPORTS_CACHE: ${HOME}/macports-cache
 
     CC: ccache cc
     CXX: ccache c++
@@ -454,55 +453,43 @@ task:
     - mkdir ${HOME}/cores
     - sudo sysctl kern.corefile="${HOME}/cores/core.%P"
 
-  perl_cache:
-    folder: ~/perl5
-  cpan_install_script:
-    - perl -mIPC::Run -e 1 || cpan -T IPC::Run
-    - perl -mIO::Pty -e 1 || cpan -T IO::Pty
-  upload_caches: perl
-
-
-  # XXX: Could we instead install homebrew into a cached directory? The
-  # homebrew installation takes a good bit of time every time, even if the
-  # packages do not need to be downloaded.
-  homebrew_cache:
-    folder: $HOMEBREW_CACHE
+  # Use macports, even though homebrew is installed. The installation
+  # of the additional packages we need would take quite a while with
+  # homebrew, even if we cache the downloads. We can't cache all of
+  # homebrew, because it's already large. So we use macports. To cache
+  # the installation we create a .dmg file that we mount if it already
+  # exists.
+  # XXX: The reason for the direct p5.34* references is that we'd need
+  # the large macport tree around to figure out that p5-io-tty is
+  # actually p5.34-io-tty. Using the unversioned name works, but
+  # updates macports every time.
+  macports_cache:
+    folder: ${MACPORTS_CACHE}
   setup_additional_packages_script: |
-    brew install \
+    sh src/tools/ci/ci_macports_packages.sh \
       ccache \
-      icu4c \
-      krb5 \
-      llvm \
+      icu \
+      kerberos5 \
       lz4 \
-      make \
       meson \
       openldap \
       openssl \
-      python \
-      tcl-tk \
+      p5.34-io-tty \
+      p5.34-ipc-run \
+      tcl \
       zstd
-
-    brew cleanup -s # to reduce cache size
-  upload_caches: homebrew
+    # Make macports install visible for subsequent steps
+    echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
+  upload_caches: macports
 
   ccache_cache:
     folder: $CCACHE_DIR
   configure_script: |
-    brewpath="/opt/homebrew"
-    PKG_CONFIG_PATH="${brewpath}/lib/pkgconfig:${PKG_CONFIG_PATH}"
-
-    for pkg in icu4c krb5 openldap openssl zstd ; do
-      pkgpath="${brewpath}/opt/${pkg}"
-      PKG_CONFIG_PATH="${pkgpath}/lib/pkgconfig:${PKG_CONFIG_PATH}"
-      PATH="${pkgpath}/bin:${pkgpath}/sbin:$PATH"
-    done
-
-    export PKG_CONFIG_PATH PATH
-
+    export PKG_CONFIG_PATH="/opt/local/lib/pkgconfig/"
     meson setup \
       --buildtype=debug \
-      -Dextra_include_dirs=${brewpath}/include \
-      -Dextra_lib_dirs=${brewpath}/lib \
+      -Dextra_include_dirs=/opt/local/include \
+      -Dextra_lib_dirs=/opt/local/lib \
       -Dcassert=true \
       -Duuid=e2fs -Ddtrace=auto \
       -Dsegsize_blocks=6 \
diff --git a/src/tools/ci/ci_macports_packages.sh b/src/tools/ci/ci_macports_packages.sh
new file mode 100755
index 00000000000..5f5d3027760
--- /dev/null
+++ b/src/tools/ci/ci_macports_packages.sh
@@ -0,0 +1,97 @@
+#!/bin/sh
+
+# Installs the passed in packages via macports. To make it fast enough
+# for CI, cache the installation as a .dmg file.  To avoid
+# unnecessarily updating the cache, the cached image is only modified
+# when packages are installed or removed.  Any package this script is
+# not instructed to install will be removed again.
+#
+# This currently expects to be run in a macos cirrus-ci environment.
+
+set -e
+set -x
+
+packages="$@"
+
+macports_url="https://github.com/macports/macports-base/releases/download/v2.8.1/MacPorts-2.8.1-13-Ventura.pkg";
+cache_dmg="macports.hfs.dmg"
+
+if [ "$CIRRUS_CI" != "true" ]; then
+    echo "expect to be called within cirrus-ci" 1>&2
+    exit 1
+fi
+
+sudo mkdir -p /opt/local
+mkdir -p ${MACPORTS_CACHE}/
+
+# If we are starting from clean cache, perform a fresh macports
+# install. Otherwise decompress the .dmg we created previously.
+#
+# After this we have a working macports installation, with an unknown set of
+# packages installed.
+new_install=0
+update_cached_image=0
+if [ -e ${MACPORTS_CACHE}/${cache_dmg}.zstd ]; then
+    time zstd -T0 -d ${MACPORTS_CACHE}/${cache_dmg}.zstd -o ${cache_dmg}
+    time sudo hdiutil attach -kernel ${cache_dmg} -owners on -shadow ${cache_dmg}.shadow -mountpoint /opt/local
+else
+    new_install=1
+    curl -fsSL -o macports.pkg "$macports_url"
+    time sudo installer -pkg macports.pkg -target /
+    # this is a throwaway environment, and it'd be a few lines to gin
+    # up a correct user / group when using the cache.
+    echo macportsuser root | sudo tee -a /opt/local/etc/macports/macports.conf
+fi
+export PATH=/opt/local/sbin/:/opt/local/bin/:$PATH
+
+# mark all installed packages unrequested, that allows us to detect
+# packages that aren't needed anymore
+if [ -n "$(port -q installed installed)" ] ; then
+    sudo port unsetrequested installed
+fi
+
+# if setting all the required packages as requested fails, we need
+# to install at least one of them
+if ! sudo port setrequested $packages > /dev/null 2>&1 ; then
+    echo not all required packages installed, doing so now
+    update_cached_image=1
+    # to keep the image small, we deleted the ports tree from the image...
+    sudo port selfupdate
+    # XXX likely we'll need some other way to force an upgrade at some
+    # point...
+    sudo port upgrade outdated
+    sudo port install -N $packages
+    sudo port setrequested $packages
+fi
+
+# check if any ports should be uninstalled
+if [ -n "$(port -q installed rleaves)" ] ; then
+    echo superfluous packages installed
+    update_cached_image=1
+    sudo port uninstall --follow-dependencies rleaves
+
+    # remove prior cache contents, don't want to increase size
+    rm -f ${MACPORTS_CACHE}/*
+fi
+
+# Shrink installation if we created / modified it
+if [ "$new_install" -eq 1 -o "$update_cached_image" -eq 1 ]; then
+    sudo /opt/local/bin/port clean --all installed
+    sudo rm -rf /opt/local/var/macports/{software,sources}/*
+fi
+
+# If we're starting from a clean cache, start a new image. If we have
+# an image, but the contents changed, update the image in the cache
+# location.
+if [ "$new_install" -eq 1 ]; then
+    # use a generous size, so additional software can be installed later
+    time sudo hdiutil create -fs HFS+ -format UDRO -size 10g -layout NONE -srcfolder /opt/local/ ${cache_dmg}
+    time zstd -T -10 -z ${cache_dmg} -o ${MACPORTS_CACHE}/${cache_dmg}.zstd
+elif [ "$update_cached_image" -eq 1 ]; then
+    sudo hdiutil detach /opt/local/
+    time hdiutil convert -format UDRO ${cache_dmg} -shadow ${cache_dmg}.shadow -o updated.hfs.dmg
+    rm ${cache_dmg}.shadow
+    mv updated.hfs.dmg ${cache_dmg}
+    time zstd -T -10 -z ${cache_dmg} -o ${MACPORTS_CACHE}/${cache_dmg}.zstd
+    time sudo hdiutil attach -kernel ${cache_dmg} -owners on -shadow ${cache_dmg}.shadow -mountpoint /opt/local
+fi
-- 
2.38.0

From b050364589df42faf58bc21820fa1593cb0bff5b Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Thu, 2 Feb 2023 21:51:53 -0800
Subject: [PATCH v1 2/9] Use "template" initdb in tests

Discussion: https://postgr.es/m/20220120021859.3zpsfqn4z7ob7...@alap3.anarazel.de
---
 meson.build                              | 30 ++++++++++
 .cirrus.yml                              |  3 +-
 src/test/perl/PostgreSQL/Test/Cluster.pm | 46 ++++++++++++++-
 src/test/regress/pg_regress.c            | 74 ++++++++++++++++++------
 src/Makefile.global.in                   | 52 +++++++++--------
 5 files changed, 161 insertions(+), 44 deletions(-)

diff --git a/meson.build b/meson.build
index 0a11efc97a1..0e8a72b12b9 100644
--- a/meson.build
+++ b/meson.build
@@ -3048,8 +3048,10 @@ testport = 40000
 test_env = environment()
 
 temp_install_bindir = test_install_location / get_option('bindir')
+test_initdb_template = meson.build_root() / 'tmp_install' / 'initdb-template'
 test_env.set('PG_REGRESS', pg_regress.full_path())
 test_env.set('REGRESS_SHLIB', regress_module.full_path())
+test_env.set('INITDB_TEMPLATE', test_initdb_template)
 
 # Test suites that are not safe by default but can be run if selected
 # by the user via the whitespace-separated list in variable PG_TEST_EXTRA.
@@ -3064,6 +3066,34 @@ if library_path_var != ''
 endif
 
 
+# Create (and remove old) initdb template directory. Tests use that, where
+# possible, to make it cheaper to run tests.
+#
+# Use python to remove the old cached initdb, as we cannot rely on a working
+# 'rm' binary on windows.
+test('initdb_cache',
+     python,
+     args: [
+       '-c', '''
+import shutil
+import sys
+import subprocess
+
+shutil.rmtree(sys.argv[1], ignore_errors=True)
+sp = subprocess.run(sys.argv[2:] + [sys.argv[1]])
+sys.exit(sp.returncode)
+''',
+       test_initdb_template,
+       temp_install_bindir / 'initdb',
+       '-A', 'trust', '-N', '--no-instructions'
+     ],
+     priority: setup_tests_priority - 1,
+     timeout: 300,
+     is_parallel: false,
+     env: test_env,
+     suite: ['setup'])
+
+
 
 ###############################################################
 # Test Generation
diff --git a/.cirrus.yml b/.cirrus.yml
index e9cfc542cfe..f08da65ed76 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -115,8 +115,9 @@ task:
   test_minimal_script: |
     su postgres <<-EOF
       ulimit -c unlimited
+      meson test $MTEST_ARGS --suite setup
       meson test $MTEST_ARGS --num-processes ${TEST_JOBS} \
-        tmp_install cube/regress pg_ctl/001_start_stop
+        cube/regress pg_ctl/001_start_stop
     EOF
 
   on_failure:
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 5e161dbee60..4d449c35de9 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -522,8 +522,50 @@ sub init
 	mkdir $self->backup_dir;
 	mkdir $self->archive_dir;
 
-	PostgreSQL::Test::Utils::system_or_bail('initdb', '-D', $pgdata, '-A',
-		'trust', '-N', @{ $params{extra} });
+	# If available and if there aren't any parameters, use a previously
+	# initdb'd cluster as a template by copying it. For a lot of tests, that's
+	# substantially cheaper. Do so only if there aren't parameters, it doesn't
+	# seem worth figuring out whether they affect compatibility.
+	#
+	# There's very similar code in pg_regress.c, but we can't easily
+	# deduplicate it until we require perl at build time.
+	if (defined $params{extra} or !defined $ENV{INITDB_TEMPLATE})
+	{
+		note("initializing database system by running initdb");
+		PostgreSQL::Test::Utils::system_or_bail('initdb', '-D', $pgdata, '-A',
+			'trust', '-N', @{ $params{extra} });
+	}
+	else
+	{
+		my @copycmd;
+		my $expected_exitcode;
+
+		note("initializing database system by copying initdb template");
+
+		if ($PostgreSQL::Test::Utils::windows_os)
+		{
+			@copycmd = qw(robocopy /E /NJS /NJH /NFL /NDL /NP);
+			$expected_exitcode = 1;    # 1 denotes files were copied
+		}
+		else
+		{
+			@copycmd = qw(cp -a);
+			$expected_exitcode = 0;
+		}
+
+		@copycmd = (@copycmd, $ENV{INITDB_TEMPLATE}, $pgdata);
+
+		my $ret = PostgreSQL::Test::Utils::system_log(@copycmd);
+
+		# See http://perldoc.perl.org/perlvar.html#%24CHILD_ERROR
+		if ($ret & 127 or $ret >> 8 != $expected_exitcode)
+		{
+			BAIL_OUT(
+				sprintf("failed to execute command \"%s\": $ret",
+					join(" ", @copycmd)));
+		}
+	}
+
 	PostgreSQL::Test::Utils::system_or_bail($ENV{PG_REGRESS},
 		'--config-auth', $pgdata, @{ $params{auth_extra} });
 
diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c
index b68632320a7..407e3915cec 100644
--- a/src/test/regress/pg_regress.c
+++ b/src/test/regress/pg_regress.c
@@ -2295,6 +2295,7 @@ regression_main(int argc, char *argv[],
 		FILE	   *pg_conf;
 		const char *env_wait;
 		int			wait_seconds;
+		const char *initdb_template_dir;
 
 		/*
 		 * Prepare the temp instance
@@ -2316,25 +2317,64 @@ regression_main(int argc, char *argv[],
 		if (!directory_exists(buf))
 			make_directory(buf);
 
-		/* initdb */
 		initStringInfo(&cmd);
-		appendStringInfo(&cmd,
-						 "\"%s%sinitdb\" -D \"%s/data\" --no-clean --no-sync",
-						 bindir ? bindir : "",
-						 bindir ? "/" : "",
-						 temp_instance);
-		if (debug)
-			appendStringInfo(&cmd, " --debug");
-		if (nolocale)
-			appendStringInfo(&cmd, " --no-locale");
-		appendStringInfo(&cmd, " > \"%s/log/initdb.log\" 2>&1", outputdir);
-		fflush(NULL);
-		if (system(cmd.data))
+
+		/*
+		 * Create data directory.
+		 *
+		 * If available, use a previously initdb'd cluster as a template by
+		 * copying it. For a lot of tests, that's substantially cheaper.
+		 *
+		 * There's very similar code in Cluster.pm, but we can't easily de
+		 * duplicate it until we require perl at build time.
+		 */
+		initdb_template_dir = getenv("INITDB_TEMPLATE");
+		if (initdb_template_dir == NULL || nolocale || debug)
 		{
-			bail("initdb failed\n"
-				 "# Examine \"%s/log/initdb.log\" for the reason.\n"
-				 "# Command was: %s",
-				 outputdir, cmd.data);
+			note("initializing database system by running initdb");
+
+			appendStringInfo(&cmd,
+							 "\"%s%sinitdb\" -D \"%s/data\" --no-clean --no-sync",
+							 bindir ? bindir : "",
+							 bindir ? "/" : "",
+							 temp_instance);
+			if (debug)
+				appendStringInfo(&cmd, " --debug");
+			if (nolocale)
+				appendStringInfo(&cmd, " --no-locale");
+			appendStringInfo(&cmd, " > \"%s/log/initdb.log\" 2>&1", outputdir);
+			fflush(NULL);
+			if (system(cmd.data))
+			{
+				bail("initdb failed\n"
+					 "# Examine \"%s/log/initdb.log\" for the reason.\n"
+					 "# Command was: %s",
+					 outputdir, cmd.data);
+			}
+		}
+		else
+		{
+#ifndef WIN32
+			const char *copycmd = "cp -a \"%s\" \"%s/data\"";
+			int			expected_exitcode = 0;
+#else
+			const char *copycmd = "robocopy /E /NJS /NJH /NFL /NDL /NP \"%s\" \"%s/data\"";
+			int			expected_exitcode = 1;	/* 1 denotes files were copied */
+#endif
+
+			note("initializing database system by copying initdb template");
+
+			appendStringInfo(&cmd,
+							 copycmd,
+							 initdb_template_dir,
+							 temp_instance);
+			if (system(cmd.data) != expected_exitcode)
+			{
+				bail("copying of initdb template failed\n"
+					 "# Examine \"%s/log/initdb.log\" for the reason.\n"
+					 "# Command was: %s",
+					 outputdir, cmd.data);
+			}
 		}
 
 		pfree(cmd.data);
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index df9f721a41a..0b4ca0eb6ae 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -397,30 +397,6 @@ check: temp-install
 
 .PHONY: temp-install
 
-temp-install: | submake-generated-headers
-ifndef NO_TEMP_INSTALL
-ifneq ($(abs_top_builddir),)
-ifeq ($(MAKELEVEL),0)
-	rm -rf '$(abs_top_builddir)'/tmp_install
-	$(MKDIR_P) '$(abs_top_builddir)'/tmp_install/log
-	$(MAKE) -C '$(top_builddir)' DESTDIR='$(abs_top_builddir)'/tmp_install install >'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1
-	$(MAKE) -j1 $(if $(CHECKPREP_TOP),-C $(CHECKPREP_TOP),) checkprep >>'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1
-endif
-endif
-endif
-
-# Tasks to run serially at the end of temp-install.  Some EXTRA_INSTALL
-# entries appear more than once in the tree, and parallel installs of the same
-# file can fail with EEXIST.
-checkprep:
-	$(if $(EXTRA_INSTALL),for extra in $(EXTRA_INSTALL); do $(MAKE) -C '$(top_builddir)'/$$extra DESTDIR='$(abs_top_builddir)'/tmp_install install || exit; done)
-
-PROVE = @PROVE@
-# There are common routines in src/test/perl, and some test suites have
-# extra perl modules in their own directory.
-PG_PROVE_FLAGS = -I $(top_srcdir)/src/test/perl/ -I $(srcdir)
-# User-supplied prove flags such as --verbose can be provided in PROVE_FLAGS.
-PROVE_FLAGS =
 
 # prepend to path if already set, else just set it
 define add_to_path
@@ -437,8 +413,36 @@ ld_library_path_var = LD_LIBRARY_PATH
 with_temp_install = \
 	PATH="$(abs_top_builddir)/tmp_install$(bindir):$(CURDIR):$$PATH" \
 	$(call add_to_path,$(strip $(ld_library_path_var)),$(abs_top_builddir)/tmp_install$(libdir)) \
+	INITDB_TEMPLATE='$(abs_top_builddir)'/tmp_install/initdb-template \
 	$(with_temp_install_extra)
 
+temp-install: | submake-generated-headers
+ifndef NO_TEMP_INSTALL
+ifneq ($(abs_top_builddir),)
+ifeq ($(MAKELEVEL),0)
+	rm -rf '$(abs_top_builddir)'/tmp_install
+	$(MKDIR_P) '$(abs_top_builddir)'/tmp_install/log
+	$(MAKE) -C '$(top_builddir)' DESTDIR='$(abs_top_builddir)'/tmp_install install >'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1
+	$(MAKE) -j1 $(if $(CHECKPREP_TOP),-C $(CHECKPREP_TOP),) checkprep >>'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1
+
+	$(with_temp_install) initdb -A trust -N --no-instructions '$(abs_top_builddir)'/tmp_install/initdb-template >>'$(abs_top_builddir)'/tmp_install/log/initdb-template.log 2>&1
+endif
+endif
+endif
+
+# Tasks to run serially at the end of temp-install.  Some EXTRA_INSTALL
+# entries appear more than once in the tree, and parallel installs of the same
+# file can fail with EEXIST.
+checkprep:
+	$(if $(EXTRA_INSTALL),for extra in $(EXTRA_INSTALL); do $(MAKE) -C '$(top_builddir)'/$$extra DESTDIR='$(abs_top_builddir)'/tmp_install install || exit; done)
+
+PROVE = @PROVE@
+# There are common routines in src/test/perl, and some test suites have
+# extra perl modules in their own directory.
+PG_PROVE_FLAGS = -I $(top_srcdir)/src/test/perl/ -I $(srcdir)
+# User-supplied prove flags such as --verbose can be provided in PROVE_FLAGS.
+PROVE_FLAGS =
+
 ifeq ($(enable_tap_tests),yes)
 
 ifndef PGXS
-- 
2.38.0

From de80483807aeef8eceef175cf2834bae3da3a0e1 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Mon, 7 Aug 2023 17:27:11 -0700
Subject: [PATCH v1 3/9] ci: macos: Remove use of -DRANDOMIZE_ALLOCATED_MEMORY

RANDOMIZE_ALLOCATED_MEMORY causes a measurable slowdown. Macos is, by far, the
most expensive platform to perform CI on, so it doesn't make sense to run such
a test there.

Ubsan and asan on linux should detect most of the cases of uninitialized
memory, so it doesn't really seem worth using -DRANDOMIZE_ALLOCATED_MEMORY on
another instance type.

Author:
Reviewed-by:
Discussion: https://postgr.es/m/
Backpatch:
---
 .cirrus.yml | 1 -
 1 file changed, 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index f08da65ed76..f3d63ff3fb0 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -434,7 +434,6 @@ task:
 
     CC: ccache cc
     CXX: ccache c++
-    CPPFLAGS: -DRANDOMIZE_ALLOCATED_MEMORY
     CFLAGS: -Og -ggdb
     CXXFLAGS: -Og -ggdb
 
-- 
2.38.0

From a86a45567151f1b2534cf599f42414edab60e2e0 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Mon, 7 Aug 2023 17:27:51 -0700
Subject: [PATCH v1 4/9] ci: switch tasks to debugoptimized build

In aggregate the CI tasks burn a lot of cpu hours. Compared to that, easy to
read backtraces aren't as important. Still use -ggdb where appropriate, as
that does make backtraces more reliable, particularly in the face of
optimization.

Author:
Reviewed-by:
Discussion: https://postgr.es/m/
Backpatch:
---
 .cirrus.yml | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index f3d63ff3fb0..bfe251f48e8 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -140,7 +140,7 @@ task:
 
     CCACHE_DIR: /tmp/ccache_dir
     CPPFLAGS: -DRELCACHE_FORCE_RELEASE -DCOPY_PARSE_PLAN_TREES -DWRITE_READ_PARSE_PLAN_TREES -DRAW_EXPRESSION_COVERAGE_TEST -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS
-    CFLAGS: -Og -ggdb
+    CFLAGS: -ggdb
 
   depends_on: SanityCheck
   only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*freebsd.*'
@@ -181,7 +181,7 @@ task:
   configure_script: |
     su postgres <<-EOF
       meson setup \
-        --buildtype=debug \
+        --buildtype=debugoptimized \
         -Dcassert=true -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
@@ -275,7 +275,7 @@ task:
     ASAN_OPTIONS: print_stacktrace=1:disable_coredump=0:abort_on_error=1:detect_leaks=0
 
     # SANITIZER_FLAGS is set in the tasks below
-    CFLAGS: -Og -ggdb -fno-sanitize-recover=all $SANITIZER_FLAGS
+    CFLAGS: -ggdb -fno-sanitize-recover=all $SANITIZER_FLAGS
     CXXFLAGS: $CFLAGS
     LDFLAGS: $SANITIZER_FLAGS
     CC: ccache gcc
@@ -364,7 +364,7 @@ task:
       configure_script: |
         su postgres <<-EOF
           meson setup \
-            --buildtype=debug \
+            --buildtype=debugoptimized \
             -Dcassert=true \
             ${LINUX_MESON_FEATURES} \
             -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
@@ -377,7 +377,7 @@ task:
         su postgres <<-EOF
           export CC='ccache gcc -m32'
           meson setup \
-            --buildtype=debug \
+            --buildtype=debugoptimized \
             -Dcassert=true \
             ${LINUX_MESON_FEATURES} \
             -Dllvm=disabled \
@@ -434,8 +434,8 @@ task:
 
     CC: ccache cc
     CXX: ccache c++
-    CFLAGS: -Og -ggdb
-    CXXFLAGS: -Og -ggdb
+    CFLAGS: -ggdb
+    CXXFLAGS: -ggdb
 
   depends_on: SanityCheck
   only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*(macos|darwin|osx).*'
@@ -487,7 +487,7 @@ task:
   configure_script: |
     export PKG_CONFIG_PATH="/opt/local/lib/pkgconfig/"
     meson setup \
-      --buildtype=debug \
+      --buildtype=debugoptimized \
       -Dextra_include_dirs=/opt/local/include \
       -Dextra_lib_dirs=/opt/local/lib \
       -Dcassert=true \
-- 
2.38.0

>From f888e984557e0ba79bf04263a962825dbc520c2d Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Sat, 5 Aug 2023 13:36:28 -0700
Subject: [PATCH v1 5/9] ci: Move use of -Dsegsize_blocks=6 from macos to linux

The option causes a measurable slowdown. Macos is, by far, the most expensive
platform to perform CI on, so move the option to another task. In addition,
the filesystem overhead seems to impact macos worse than other platforms.
---
 .cirrus.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index bfe251f48e8..35ef9c97211 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -366,6 +366,7 @@ task:
           meson setup \
             --buildtype=debugoptimized \
             -Dcassert=true \
+            -Dsegsize_blocks=6 \
             ${LINUX_MESON_FEATURES} \
             -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
             build
@@ -492,7 +493,6 @@ task:
       -Dextra_lib_dirs=/opt/local/lib \
       -Dcassert=true \
       -Duuid=e2fs -Ddtrace=auto \
-      -Dsegsize_blocks=6 \
       -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
       build
 
-- 
2.38.0

From 7169a36f31568782d55cb2e57c5a3d6c87f23607 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Mon, 7 Aug 2023 16:56:29 -0700
Subject: [PATCH v1 6/9] ci: windows: Disable write cache flushing during
 tests

This has been measured to reduce windows test times by about 30s.
---
 .cirrus.yml                          | 2 ++
 src/tools/ci/windows_write_cache.ps1 | 3 +++
 2 files changed, 5 insertions(+)
 create mode 100644 src/tools/ci/windows_write_cache.ps1

diff --git a/.cirrus.yml b/.cirrus.yml
index 35ef9c97211..ef825485826 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -559,6 +559,8 @@ task:
   setup_additional_packages_script: |
     REM choco install -y --no-progress ...
 
+  change_write_caching_script: powershell src/tools/ci/windows_write_cache.ps1
+
   setup_hosts_file_script: |
     echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
     echo 127.0.0.2 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
diff --git a/src/tools/ci/windows_write_cache.ps1 b/src/tools/ci/windows_write_cache.ps1
new file mode 100644
index 00000000000..9c52bc886d4
--- /dev/null
+++ b/src/tools/ci/windows_write_cache.ps1
@@ -0,0 +1,3 @@
+Get-ItemProperty -path "HKLM:/SYSTEM/CurrentControlSet/Enum/SCSI/*/*/Device Parameters/Disk" -name CacheIsPowerProtected
+Set-ItemProperty -path "HKLM:/SYSTEM/CurrentControlSet/Enum/SCSI/*/*/Device Parameters/Disk" -name CacheIsPowerProtected -Value 0
+Get-ItemProperty -path "HKLM:/SYSTEM/CurrentControlSet/Enum/SCSI/*/*/Device Parameters/Disk" -name CacheIsPowerProtected
-- 
2.38.0

From 99089f7cfbbfb2d10074009716a1420163bc6697 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Mon, 7 Aug 2023 16:51:16 -0700
Subject: [PATCH v1 7/9] regress: Check for postgres startup completion more
 often

Previously pg_regress.c only checked once a second whether the server had
started up - in most cases startup is much faster though. Use the same
interval as pg_ctl does.
---
 src/test/regress/pg_regress.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c
index 407e3915cec..46af1fddbdb 100644
--- a/src/test/regress/pg_regress.c
+++ b/src/test/regress/pg_regress.c
@@ -75,6 +75,9 @@ const char *pretty_diff_opts = "-w -U3";
  */
 #define TESTNAME_WIDTH 36
 
+/* how often to recheck if postgres startup completed */
+#define WAITS_PER_SEC	10
+
 typedef enum TAPtype
 {
 	DIAG = 0,
@@ -2499,7 +2502,7 @@ regression_main(int argc, char *argv[],
 		else
 			wait_seconds = 60;
 
-		for (i = 0; i < wait_seconds; i++)
+		for (i = 0; i < wait_seconds * WAITS_PER_SEC; i++)
 		{
 			/* Done if psql succeeds */
 			fflush(NULL);
@@ -2519,7 +2522,7 @@ regression_main(int argc, char *argv[],
 					 outputdir);
 			}
 
-			pg_usleep(1000000L);
+			pg_usleep(1000000L / WAITS_PER_SEC);
 		}
 		if (i >= wait_seconds)
 		{
-- 
2.38.0

From 9edf90d5bfbb1933c68525481636f2e33ea3b1b4 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Mon, 7 Aug 2023 17:31:15 -0700
Subject: [PATCH v1 8/9] ci: Don't specify amount of memory

The number of CPUs is the cost-determining factor. Most instance types that
run tests have more memory per core than what we specified; there's no real
benefit in wasting that.
---
 .cirrus.yml | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index ef825485826..a4d64955489 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -150,7 +150,6 @@ task:
     image: family/pg-ci-freebsd-13
     platform: freebsd
     cpu: $CPUS
-    memory: 4G
     disk: 50
 
   sysinfo_script: |
@@ -292,7 +291,6 @@ task:
     image: family/pg-ci-bullseye
     platform: linux
     cpu: $CPUS
-    memory: 4G
 
   ccache_cache:
     folder: ${CCACHE_DIR}
@@ -554,7 +552,6 @@ task:
     image: family/pg-ci-windows-ci-vs-2019
     platform: windows
     cpu: $CPUS
-    memory: 4G
 
   setup_additional_packages_script: |
     REM choco install -y --no-progress ...
@@ -604,7 +601,6 @@ task:
     image: family/pg-ci-windows-ci-mingw64
     platform: windows
     cpu: $CPUS
-    memory: 4G
 
   env:
     TEST_JOBS: 4 # higher concurrency causes occasional failures
-- 
2.38.0
