Hi,

On 2016-03-17 09:01:36 -0400, Robert Haas wrote:
> 0001: Looking at this again, I'm no longer sure this is a bug.
> Doesn't your patch just check the same conditions in the opposite
> order?

The order is important, because what ends up in pfds[x] depends on
wakeEvents. Folded it into a later patch; it's not harmful as long as
we're only ever testing pfds[0].
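
To make that concrete, here's roughly how the old poll() path filled
the array (condensed from unix_latch.c, not a verbatim excerpt) - the
self-pipe's slot shifts depending on whether socket events were
requested, so the result checks have to test the conditions in the same
order the array was filled:

    nfds = 0;
    if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
        pfds[nfds++].fd = sock;         /* socket, if used, is pfds[0] */
    pfds[nfds++].fd = selfpipe_readfd;  /* pipe is pfds[0] or pfds[1] */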


> 0003: Mostly boring.  But the change to win32_latch.c seems to remove
> an unrelated check.

Argh.


> 0004:
> 
> +         * drain it everytime WaitLatchOrSocket() is used. Should the
> +         * pipe-buffer fill up in some scenarios - widly unlikely - we're
> 
> every time
> wildly
> 
> Why is it wildly (or widly) unlikely?

Because SetLatch() (if called by the owner) checks latch->is_set
before adding to the pipe, and latch_sigusr1_handler() only writes to
the pipe if the current process is in WaitLatchOrSocket's loop (via the
waiting check). Expanded the comment.
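
For reference, the two guards in question look roughly like this,
condensed from SetLatch() and latch_sigusr1_handler() (not verbatim):

    /* SetLatch(): quick exit if already set, so the owner rarely writes */
    if (latch->is_set)
        return;
    latch->is_set = true;
    if (owner_pid == MyProcPid)
    {
        if (waiting)
            sendSelfPipeByte(); /* only while in WaitLatchOrSocket() */
    }
    else
        kill(owner_pid, SIGUSR1);   /* handler below writes on our behalf */

    /* latch_sigusr1_handler(): */
    if (waiting)
        sendSelfPipeByte();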


> 
> +         * Check again wether latch is set, the arrival of a signal/self-byte
> 
> whether.  Also not clearly related to the patch's main purpose.

After the change there's no need to re-compute the current timestamp
anymore; that does seem beneficial, and at least somewhat related.


>              /* at least one event occurred, so check masks */
> +            if (FD_ISSET(selfpipe_readfd, &input_mask))
> +            {
> +                /* There's data in the self-pipe, clear it. */
> +                drainSelfPipe();
> +            }
> 
> The comment just preceding this added hunk now seems to be out of
> place, and maybe inaccurate as well.

Hm. Which comment exactly are you referring to?
                        /* at least one event occurred, so check masks */
seems not to fit the bill?


> I think the new code could have
> a bit more detailed comment.  My understanding is something like /*
> Since we didn't clear the self-pipe before attempting to wait,
> select() may have returned immediately even though there has been no
> recent change to the state of the latch.  To prevent busy-looping, we
> must clear the pipe before attempting to wait again. */

Isn't that explained at the top, in
                /*
                 * Check if the latch is set already. If so, leave loop immediately,
                 * avoid blocking again. We don't attempt to report any other events
                 * that might also be satisfied.
                 *
                 * If someone sets the latch between this and the poll()/select()
                 * below, the setter will write a byte to the pipe (or signal us and
                 * the signal handler will do that), and the poll()/select() will
                 * return immediately.
                 *
                 * If there's a pending byte in the self pipe, we'll notice whenever
                 * blocking. Only clearing the pipe in that case avoids having to
                 * drain it every time WaitLatchOrSocket() is used. Should the
                 * pipe-buffer fill up in some scenarios - wildly unlikely - we're
                 * still ok, because the pipe is in nonblocking mode.
?

I've updated the last paragraph to
                 * If there's a pending byte in the self pipe, we'll notice whenever
                 * blocking. Only clearing the pipe in that case avoids having to
                 * drain it every time WaitLatchOrSocket() is used. Should the
                 * pipe-buffer fill up we're still ok, because the pipe is in
                 * nonblocking mode. It's unlikely for that to happen, because the
                 * self pipe isn't filled unless we're blocking (waiting = true), or
                 * from inside a signal handler in latch_sigusr1_handler().

I've also applied the same optimization to Windows; less because I
found that interesting in itself, and more because it makes the
WaitEventSet patch easier.


Attached is a significantly revised version of the earlier series. Most
importantly I have:
* Unified the windows/unix latch implementations into one file (0004)
* Provided a select(2) implementation for the WaitEventSet API
* Provided a windows implementation for the WaitEventSet API
* Reduced duplication between the implementations a good bit by
  splitting WaitEventSetWait into WaitEventSetWait and
  WaitEventSetWaitBlock. Only the latter is implemented separately for
  each readiness primitive
* Added a backward-compatibility implementation of WaitLatchOrSocket
  using the WaitEventSet stuff. Less because I thought that to be
  terribly important, and more because it makes the patch a *lot*
  smaller; we've accumulated a fair number of latch users.
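
For anyone skimming, usage of the new API looks roughly like the
following, modeled on what 0005 does in pq_init() and be-secure.c (the
exact signatures may still change):

    /* create the set once, up front, with space for three events */
    WaitEventSet *set = CreateWaitEventSet(TopMemoryContext, 3);
    WaitEvent   event;

    AddWaitEventToSet(set, WL_SOCKET_WRITEABLE, MyProcPort->sock, NULL);
    AddWaitEventToSet(set, WL_LATCH_SET, -1, MyLatch);
    AddWaitEventToSet(set, WL_POSTMASTER_DEATH, -1, NULL);

    /* later, possibly many times: adjust the socket event and wait */
    ModifyWaitEvent(set, 0, WL_SOCKET_READABLE, NULL);
    WaitEventSetWait(set, 0 /* no timeout */, &event, 1);

    if (event.events & WL_LATCH_SET)
        ResetLatch(MyLatch);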

This is still not fully ready. The main remaining items are testing
(the windows stuff I've only verified by cross-compiling with mingw)
and documentation.

I'd greatly appreciate a look.

Amit, you offered testing on windows; could you check whether 3/4/5
work? It's quite likely that I've screwed up something.


Robert, you'd mentioned on IM that you have a use-case for this
somewhere around multiple FDWs. If somebody has started working on
that, could you ask that person to check whether the API makes sense?
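
In case it helps that discussion: with the new API the multiplexed case
would presumably look something like the below. Purely illustrative -
nconns and conn_fds are made up, and I'm assuming WaitEventSetWait()'s
return value is the number of occurred events:

    WaitEvent   occurred[MAX_CONNS];    /* MAX_CONNS is hypothetical */
    WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, nconns + 1);
    int         i,
                nevents;

    AddWaitEventToSet(set, WL_LATCH_SET, -1, MyLatch);
    for (i = 0; i < nconns; i++)
        AddWaitEventToSet(set, WL_SOCKET_READABLE, conn_fds[i], NULL);

    /* one call can report several ready sockets at once */
    nevents = WaitEventSetWait(set, 0 /* no timeout */, occurred, nconns);
    for (i = 0; i < nevents; i++)
    {
        if (occurred[i].events & WL_SOCKET_READABLE)
        {
            /* drive the corresponding remote query forward */
        }
    }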

Greetings,

Andres Freund
>From 916b95e211aa017643088ba7cbb239545ac8d944 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Fri, 18 Mar 2016 00:52:07 -0700
Subject: [PATCH 1/5] Make it easier to choose the used waiting primitive in
 unix_latch.c.

This allows for easier testing of the different primitives; in
preparation for adding a new primitive.

Discussion: 20160114143931.gg10...@awork2.anarazel.de
Reviewed-By: Robert Haas
---
 src/backend/port/unix_latch.c | 50 +++++++++++++++++++++++++++++--------------
 1 file changed, 34 insertions(+), 16 deletions(-)

diff --git a/src/backend/port/unix_latch.c b/src/backend/port/unix_latch.c
index 2ad609c..f52704b 100644
--- a/src/backend/port/unix_latch.c
+++ b/src/backend/port/unix_latch.c
@@ -56,6 +56,22 @@
 #include "storage/pmsignal.h"
 #include "storage/shmem.h"
 
+/*
+ * Select the fd readiness primitive to use. Normally the "most modern"
+ * primitive supported by the OS will be used, but for testing it can be
+ * useful to manually specify the used primitive.  If desired, just add a
+ * define somewhere before this block.
+ */
+#if defined(LATCH_USE_POLL) || defined(LATCH_USE_SELECT)
+/* don't overwrite manual choice */
+#elif defined(HAVE_POLL)
+#define LATCH_USE_POLL
+#elif HAVE_SYS_SELECT_H
+#define LATCH_USE_SELECT
+#else
+#error "no latch implementation available"
+#endif
+
 /* Are we currently in WaitLatch? The signal handler would like to know. */
 static volatile sig_atomic_t waiting = false;
 
@@ -215,10 +231,10 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 				cur_time;
 	long		cur_timeout;
 
-#ifdef HAVE_POLL
+#if defined(LATCH_USE_POLL)
 	struct pollfd pfds[3];
 	int			nfds;
-#else
+#elif defined(LATCH_USE_SELECT)
 	struct timeval tv,
 			   *tvp;
 	fd_set		input_mask;
@@ -247,7 +263,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 		Assert(timeout >= 0 && timeout <= INT_MAX);
 		cur_timeout = timeout;
 
-#ifndef HAVE_POLL
+#ifdef LATCH_USE_SELECT
 		tv.tv_sec = cur_timeout / 1000L;
 		tv.tv_usec = (cur_timeout % 1000L) * 1000L;
 		tvp = &tv;
@@ -257,7 +273,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 	{
 		cur_timeout = -1;
 
-#ifndef HAVE_POLL
+#ifdef LATCH_USE_SELECT
 		tvp = NULL;
 #endif
 	}
@@ -291,16 +307,10 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 		}
 
 		/*
-		 * Must wait ... we use poll(2) if available, otherwise select(2).
-		 *
-		 * On at least older linux kernels select(), in violation of POSIX,
-		 * doesn't reliably return a socket as writable if closed - but we
-		 * rely on that. So far all the known cases of this problem are on
-		 * platforms that also provide a poll() implementation without that
-		 * bug.  If we find one where that's not the case, we'll need to add a
-		 * workaround.
+		 * Must wait ... we use the polling interface determined at the top of
+		 * this file to do so.
 		 */
-#ifdef HAVE_POLL
+#if defined(LATCH_USE_POLL)
 		nfds = 0;
 		if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
 		{
@@ -396,8 +406,16 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 					result |= WL_POSTMASTER_DEATH;
 			}
 		}
-#else							/* !HAVE_POLL */
+#elif defined(LATCH_USE_SELECT)
 
+		/*
+		 * On at least older linux kernels select(), in violation of POSIX,
+		 * doesn't reliably return a socket as writable if closed - but we
+		 * rely on that. So far all the known cases of this problem are on
+		 * platforms that also provide a poll() implementation without that
+		 * bug.  If we find one where that's not the case, we'll need to add a
+		 * workaround.
+		 */
 		FD_ZERO(&input_mask);
 		FD_ZERO(&output_mask);
 
@@ -477,7 +495,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 					result |= WL_POSTMASTER_DEATH;
 			}
 		}
-#endif   /* HAVE_POLL */
+#endif   /* LATCH_USE_SELECT */
 
 		/* If we're not done, update cur_timeout for next iteration */
 		if (result == 0 && (wakeEvents & WL_TIMEOUT))
@@ -490,7 +508,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 				/* Timeout has expired, no need to continue looping */
 				result |= WL_TIMEOUT;
 			}
-#ifndef HAVE_POLL
+#ifdef LATCH_USE_SELECT
 			else
 			{
 				tv.tv_sec = cur_timeout / 1000L;
-- 
2.7.0.229.g701fa7f

>From 2eeb7dd4f6401a4f2d45293cddd505018aa4431e Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Fri, 18 Mar 2016 00:52:07 -0700
Subject: [PATCH 2/5] Error out if waiting on socket readiness without a
 specified socket.

Previously we just ignored such an attempt, but that seems to serve no
purpose but making things harder to debug.

Discussion: 20160114143931.gg10...@awork2.anarazel.de
    20151230173734.hx7jj2fnwyljf...@alap3.anarazel.de
Reviewed-By: Robert Haas
---
 src/backend/port/unix_latch.c  | 9 +++++----
 src/backend/port/win32_latch.c | 9 +++++----
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/src/backend/port/unix_latch.c b/src/backend/port/unix_latch.c
index f52704b..e7be7ec 100644
--- a/src/backend/port/unix_latch.c
+++ b/src/backend/port/unix_latch.c
@@ -242,12 +242,13 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 	int			hifd;
 #endif
 
-	/* Ignore WL_SOCKET_* events if no valid socket is given */
-	if (sock == PGINVALID_SOCKET)
-		wakeEvents &= ~(WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE);
-
 	Assert(wakeEvents != 0);	/* must have at least one wake event */
 
+	/* waiting for socket readiness without a socket indicates a bug */
+	if (sock == PGINVALID_SOCKET &&
+		(wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != 0)
+		elog(ERROR, "cannot wait on socket event without a socket");
+
 	if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != MyProcPid)
 		elog(ERROR, "cannot wait on a latch owned by another process");
 
diff --git a/src/backend/port/win32_latch.c b/src/backend/port/win32_latch.c
index 80adc13..b1b0713 100644
--- a/src/backend/port/win32_latch.c
+++ b/src/backend/port/win32_latch.c
@@ -113,12 +113,13 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 	int			result = 0;
 	int			pmdeath_eventno = 0;
 
-	/* Ignore WL_SOCKET_* events if no valid socket is given */
-	if (sock == PGINVALID_SOCKET)
-		wakeEvents &= ~(WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE);
-
 	Assert(wakeEvents != 0);	/* must have at least one wake event */
 
+	/* waiting for socket readiness without a socket indicates a bug */
+	if (sock == PGINVALID_SOCKET &&
+		(wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != 0)
+		elog(ERROR, "cannot wait on socket event without a socket");
+
 	if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != MyProcPid)
 		elog(ERROR, "cannot wait on a latch owned by another process");
 
-- 
2.7.0.229.g701fa7f

>From d76ac6f857c4c273a54b3f9b914363587667f435 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Fri, 18 Mar 2016 00:52:07 -0700
Subject: [PATCH 3/5] Only clear latch self-pipe/event if there is a pending
 notification.

This avoids a good number of, individually quite fast, system calls in
scenarios with many quick queries. Besides the aesthetic benefit of
seeing fewer superfluous system calls with strace, it also improves
performance by ~2% measured by pgbench -M prepared -c 96 -j 8 -S (scale
100).

Without having benchmarked it, this patch also adjusts the Windows code,
as that makes it easier to unify the unix/windows codepaths in a later
patch. There's little reason to diverge in behaviour between the
platforms.

Discussion: CA+TgmoYc1Zm+Szoc_Qbzi92z2c1vRHZmjhfPn5uC=w8bxv6...@mail.gmail.com
Reviewed-By: Robert Haas
---
 src/backend/port/unix_latch.c  | 81 ++++++++++++++++++++++++++++--------------
 src/backend/port/win32_latch.c | 19 +++++-----
 2 files changed, 65 insertions(+), 35 deletions(-)

diff --git a/src/backend/port/unix_latch.c b/src/backend/port/unix_latch.c
index e7be7ec..104401d 100644
--- a/src/backend/port/unix_latch.c
+++ b/src/backend/port/unix_latch.c
@@ -283,27 +283,31 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 	do
 	{
 		/*
-		 * Clear the pipe, then check if the latch is set already. If someone
-		 * sets the latch between this and the poll()/select() below, the
-		 * setter will write a byte to the pipe (or signal us and the signal
-		 * handler will do that), and the poll()/select() will return
-		 * immediately.
+		 * Check if the latch is set already. If so, leave loop immediately,
+		 * avoid blocking again. We don't attempt to report any other events
+		 * that might also be satisfied.
+		 *
+		 * If someone sets the latch between this and the poll()/select()
+		 * below, the setter will write a byte to the pipe (or signal us and
+		 * the signal handler will do that), and the poll()/select() will
+		 * return immediately.
+		 *
+		 * If there's a pending byte in the self pipe, we'll notice whenever
+		 * blocking. Only clearing the pipe in that case avoids having to
+		 * drain it every time WaitLatchOrSocket() is used. Should the
+		 * pipe-buffer fill up we're still ok, because the pipe is in
+		 * nonblocking mode. It's unlikely for that to happen, because the
+		 * self pipe isn't filled unless we're blocking (waiting = true), or
+		 * from inside a signal handler in latch_sigusr1_handler().
 		 *
 		 * Note: we assume that the kernel calls involved in drainSelfPipe()
 		 * and SetLatch() will provide adequate synchronization on machines
 		 * with weak memory ordering, so that we cannot miss seeing is_set if
 		 * the signal byte is already in the pipe when we drain it.
 		 */
-		drainSelfPipe();
-
 		if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
 		{
 			result |= WL_LATCH_SET;
-
-			/*
-			 * Leave loop immediately, avoid blocking again. We don't attempt
-			 * to report any other events that might also be satisfied.
-			 */
 			break;
 		}
 
@@ -313,24 +317,26 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 		 */
 #if defined(LATCH_USE_POLL)
 		nfds = 0;
+
+		/* selfpipe is always in pfds[0] */
+		pfds[0].fd = selfpipe_readfd;
+		pfds[0].events = POLLIN;
+		pfds[0].revents = 0;
+		nfds++;
+
 		if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
 		{
-			/* socket, if used, is always in pfds[0] */
-			pfds[0].fd = sock;
-			pfds[0].events = 0;
+			/* socket, if used, is always in pfds[1] */
+			pfds[1].fd = sock;
+			pfds[1].events = 0;
 			if (wakeEvents & WL_SOCKET_READABLE)
-				pfds[0].events |= POLLIN;
+				pfds[1].events |= POLLIN;
 			if (wakeEvents & WL_SOCKET_WRITEABLE)
-				pfds[0].events |= POLLOUT;
-			pfds[0].revents = 0;
+				pfds[1].events |= POLLOUT;
+			pfds[1].revents = 0;
 			nfds++;
 		}
 
-		pfds[nfds].fd = selfpipe_readfd;
-		pfds[nfds].events = POLLIN;
-		pfds[nfds].revents = 0;
-		nfds++;
-
 		if (wakeEvents & WL_POSTMASTER_DEATH)
 		{
 			/* postmaster fd, if used, is always in pfds[nfds - 1] */
@@ -364,19 +370,27 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 		else
 		{
 			/* at least one event occurred, so check revents values */
+
+			if (pfds[0].revents & POLLIN)
+			{
+				/* There's data in the self-pipe, clear it. */
+				drainSelfPipe();
+			}
+
 			if ((wakeEvents & WL_SOCKET_READABLE) &&
-				(pfds[0].revents & POLLIN))
+				(pfds[1].revents & POLLIN))
 			{
 				/* data available in socket, or EOF/error condition */
 				result |= WL_SOCKET_READABLE;
 			}
 			if ((wakeEvents & WL_SOCKET_WRITEABLE) &&
-				(pfds[0].revents & POLLOUT))
+				(pfds[1].revents & POLLOUT))
 			{
 				/* socket is writable */
 				result |= WL_SOCKET_WRITEABLE;
 			}
-			if (pfds[0].revents & (POLLHUP | POLLERR | POLLNVAL))
+			if ((wakeEvents & WL_SOCKET_WRITEABLE) &&
+				(pfds[1].revents & (POLLHUP | POLLERR | POLLNVAL)))
 			{
 				/* EOF/error condition */
 				if (wakeEvents & WL_SOCKET_READABLE)
@@ -468,6 +482,11 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 		else
 		{
 			/* at least one event occurred, so check masks */
+			if (FD_ISSET(selfpipe_readfd, &input_mask))
+			{
+				/* There's data in the self-pipe, clear it. */
+				drainSelfPipe();
+			}
 			if ((wakeEvents & WL_SOCKET_READABLE) && FD_ISSET(sock, &input_mask))
 			{
 				/* data available in socket, or EOF */
@@ -498,6 +517,16 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 		}
 #endif   /* LATCH_USE_SELECT */
 
+		/*
+		 * Check again whether latch is set, the arrival of a signal/self-byte
+		 * might be what stopped our sleep. It's not required for correctness
+		 * to signal the latch as being set (we'd just loop if there's no
+		 * other event), but it seems good to report an arrived latch asap.
+		 * This way we also don't have to compute the current timestamp again.
+		 */
+		if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
+			result |= WL_LATCH_SET;
+
 		/* If we're not done, update cur_timeout for next iteration */
 		if (result == 0 && (wakeEvents & WL_TIMEOUT))
 		{
diff --git a/src/backend/port/win32_latch.c b/src/backend/port/win32_latch.c
index b1b0713..bbf1b24 100644
--- a/src/backend/port/win32_latch.c
+++ b/src/backend/port/win32_latch.c
@@ -181,14 +181,11 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 	do
 	{
 		/*
-		 * Reset the event, and check if the latch is set already. If someone
-		 * sets the latch between this and the WaitForMultipleObjects() call
-		 * below, the setter will set the event and WaitForMultipleObjects()
-		 * will return immediately.
+		 * The comment in unix_latch.c's equivalent to this applies here as
+		 * well. At least after mentally replacing self-pipe with windows
+		 * event. There's no danger of overflowing, as "Setting an event that
+		 * is already set has no effect.".
 		 */
-		if (!ResetEvent(latchevent))
-			elog(ERROR, "ResetEvent failed: error code %lu", GetLastError());
-
 		if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
 		{
 			result |= WL_LATCH_SET;
@@ -217,9 +214,13 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 		else if (rc == WAIT_OBJECT_0 + 1)
 		{
 			/*
-			 * Latch is set.  We'll handle that on next iteration of loop, but
-			 * let's not waste the cycles to update cur_timeout below.
+			 * Reset the event.  We'll re-check the, potentially, set latch on
+			 * next iteration of loop, but let's not waste the cycles to
+			 * update cur_timeout below.
 			 */
+			if (!ResetEvent(latchevent))
+				elog(ERROR, "ResetEvent failed: error code %lu", GetLastError());
+
 			continue;
 		}
 		else if ((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) &&
-- 
2.7.0.229.g701fa7f

>From 1d444b0855dbf65d66d73beb647b772fff3404c8 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Fri, 18 Mar 2016 00:52:07 -0700
Subject: [PATCH 4/5] Combine win32 and unix latch implementations.

Previously latches for windows and unix had been implemented in
different files. The next patch in this series will introduce an
expanded wait infrastructure; keeping the implementations separate would
introduce too much duplication.

This basically just moves the functions, without much change. The
reason to keep this separate is that it allows git blame to continue
working a little less badly, and makes review a tiny bit easier.
---
 configure                                          |  10 +-
 configure.in                                       |   8 -
 src/backend/Makefile                               |   3 +-
 src/backend/port/.gitignore                        |   1 -
 src/backend/port/Makefile                          |   2 +-
 src/backend/port/win32_latch.c                     | 349 ---------------------
 src/backend/storage/ipc/Makefile                   |   5 +-
 .../{port/unix_latch.c => storage/ipc/latch.c}     | 280 ++++++++++++++++-
 src/include/storage/latch.h                        |   2 +-
 src/tools/msvc/Mkvcbuild.pm                        |   2 -
 10 files changed, 277 insertions(+), 385 deletions(-)
 delete mode 100644 src/backend/port/win32_latch.c
 rename src/backend/{port/unix_latch.c => storage/ipc/latch.c} (74%)

diff --git a/configure b/configure
index a45be67..c10d954 100755
--- a/configure
+++ b/configure
@@ -14786,13 +14786,6 @@ $as_echo "#define USE_WIN32_SHARED_MEMORY 1" >>confdefs.h
   SHMEM_IMPLEMENTATION="src/backend/port/win32_shmem.c"
 fi
 
-# Select latch implementation type.
-if test "$PORTNAME" != "win32"; then
-  LATCH_IMPLEMENTATION="src/backend/port/unix_latch.c"
-else
-  LATCH_IMPLEMENTATION="src/backend/port/win32_latch.c"
-fi
-
 # If not set in template file, set bytes to use libc memset()
 if test x"$MEMSET_LOOP_LIMIT" = x"" ; then
   MEMSET_LOOP_LIMIT=1024
@@ -15868,7 +15861,7 @@ fi
 ac_config_files="$ac_config_files GNUmakefile src/Makefile.global"
 
 
-ac_config_links="$ac_config_links src/backend/port/dynloader.c:src/backend/port/dynloader/${template}.c src/backend/port/pg_sema.c:${SEMA_IMPLEMENTATION} src/backend/port/pg_shmem.c:${SHMEM_IMPLEMENTATION} src/backend/port/pg_latch.c:${LATCH_IMPLEMENTATION} src/include/dynloader.h:src/backend/port/dynloader/${template}.h src/include/pg_config_os.h:src/include/port/${template}.h src/Makefile.port:src/makefiles/Makefile.${template}"
+ac_config_links="$ac_config_links src/backend/port/dynloader.c:src/backend/port/dynloader/${template}.c src/backend/port/pg_sema.c:${SEMA_IMPLEMENTATION} src/backend/port/pg_shmem.c:${SHMEM_IMPLEMENTATION} src/include/dynloader.h:src/backend/port/dynloader/${template}.h src/include/pg_config_os.h:src/include/port/${template}.h src/Makefile.port:src/makefiles/Makefile.${template}"
 
 
 if test "$PORTNAME" = "win32"; then
@@ -16592,7 +16585,6 @@ do
     "src/backend/port/dynloader.c") CONFIG_LINKS="$CONFIG_LINKS src/backend/port/dynloader.c:src/backend/port/dynloader/${template}.c" ;;
     "src/backend/port/pg_sema.c") CONFIG_LINKS="$CONFIG_LINKS src/backend/port/pg_sema.c:${SEMA_IMPLEMENTATION}" ;;
     "src/backend/port/pg_shmem.c") CONFIG_LINKS="$CONFIG_LINKS src/backend/port/pg_shmem.c:${SHMEM_IMPLEMENTATION}" ;;
-    "src/backend/port/pg_latch.c") CONFIG_LINKS="$CONFIG_LINKS src/backend/port/pg_latch.c:${LATCH_IMPLEMENTATION}" ;;
     "src/include/dynloader.h") CONFIG_LINKS="$CONFIG_LINKS src/include/dynloader.h:src/backend/port/dynloader/${template}.h" ;;
     "src/include/pg_config_os.h") CONFIG_LINKS="$CONFIG_LINKS src/include/pg_config_os.h:src/include/port/${template}.h" ;;
     "src/Makefile.port") CONFIG_LINKS="$CONFIG_LINKS src/Makefile.port:src/makefiles/Makefile.${template}" ;;
diff --git a/configure.in b/configure.in
index c298926..47d0f58 100644
--- a/configure.in
+++ b/configure.in
@@ -1976,13 +1976,6 @@ else
   SHMEM_IMPLEMENTATION="src/backend/port/win32_shmem.c"
 fi
 
-# Select latch implementation type.
-if test "$PORTNAME" != "win32"; then
-  LATCH_IMPLEMENTATION="src/backend/port/unix_latch.c"
-else
-  LATCH_IMPLEMENTATION="src/backend/port/win32_latch.c"
-fi
-
 # If not set in template file, set bytes to use libc memset()
 if test x"$MEMSET_LOOP_LIMIT" = x"" ; then
   MEMSET_LOOP_LIMIT=1024
@@ -2178,7 +2171,6 @@ AC_CONFIG_LINKS([
   src/backend/port/dynloader.c:src/backend/port/dynloader/${template}.c
   src/backend/port/pg_sema.c:${SEMA_IMPLEMENTATION}
   src/backend/port/pg_shmem.c:${SHMEM_IMPLEMENTATION}
-  src/backend/port/pg_latch.c:${LATCH_IMPLEMENTATION}
   src/include/dynloader.h:src/backend/port/dynloader/${template}.h
   src/include/pg_config_os.h:src/include/port/${template}.h
   src/Makefile.port:src/makefiles/Makefile.${template}
diff --git a/src/backend/Makefile b/src/backend/Makefile
index b3d5e2e..d22dbbf 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -306,8 +306,7 @@ ifeq ($(PORTNAME), win32)
 endif
 
 distclean: clean
-	rm -f port/tas.s port/dynloader.c port/pg_sema.c port/pg_shmem.c \
-	      port/pg_latch.c
+	rm -f port/tas.s port/dynloader.c port/pg_sema.c port/pg_shmem.c
 
 maintainer-clean: distclean
 	rm -f bootstrap/bootparse.c \
diff --git a/src/backend/port/.gitignore b/src/backend/port/.gitignore
index 7d3ac4a..9f4f1af 100644
--- a/src/backend/port/.gitignore
+++ b/src/backend/port/.gitignore
@@ -1,5 +1,4 @@
 /dynloader.c
-/pg_latch.c
 /pg_sema.c
 /pg_shmem.c
 /tas.s
diff --git a/src/backend/port/Makefile b/src/backend/port/Makefile
index c6b1d20..89549d0 100644
--- a/src/backend/port/Makefile
+++ b/src/backend/port/Makefile
@@ -21,7 +21,7 @@ subdir = src/backend/port
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o pg_latch.o $(TAS)
+OBJS = atomics.o dynloader.o pg_sema.o pg_shmem.o $(TAS)
 
 ifeq ($(PORTNAME), darwin)
 SUBDIRS += darwin
diff --git a/src/backend/port/win32_latch.c b/src/backend/port/win32_latch.c
deleted file mode 100644
index bbf1b24..0000000
--- a/src/backend/port/win32_latch.c
+++ /dev/null
@@ -1,349 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * win32_latch.c
- *	  Routines for inter-process latches
- *
- * See unix_latch.c for header comments for the exported functions;
- * the API presented here is supposed to be the same as there.
- *
- * The Windows implementation uses Windows events that are inherited by
- * all postmaster child processes.
- *
- * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
- * Portions Copyright (c) 1994, Regents of the University of California
- *
- * IDENTIFICATION
- *	  src/backend/port/win32_latch.c
- *
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include <fcntl.h>
-#include <limits.h>
-#include <signal.h>
-#include <unistd.h>
-
-#include "miscadmin.h"
-#include "portability/instr_time.h"
-#include "postmaster/postmaster.h"
-#include "storage/barrier.h"
-#include "storage/latch.h"
-#include "storage/pmsignal.h"
-#include "storage/shmem.h"
-
-
-void
-InitializeLatchSupport(void)
-{
-	/* currently, nothing to do here for Windows */
-}
-
-void
-InitLatch(volatile Latch *latch)
-{
-	latch->is_set = false;
-	latch->owner_pid = MyProcPid;
-	latch->is_shared = false;
-
-	latch->event = CreateEvent(NULL, TRUE, FALSE, NULL);
-	if (latch->event == NULL)
-		elog(ERROR, "CreateEvent failed: error code %lu", GetLastError());
-}
-
-void
-InitSharedLatch(volatile Latch *latch)
-{
-	SECURITY_ATTRIBUTES sa;
-
-	latch->is_set = false;
-	latch->owner_pid = 0;
-	latch->is_shared = true;
-
-	/*
-	 * Set up security attributes to specify that the events are inherited.
-	 */
-	ZeroMemory(&sa, sizeof(sa));
-	sa.nLength = sizeof(sa);
-	sa.bInheritHandle = TRUE;
-
-	latch->event = CreateEvent(&sa, TRUE, FALSE, NULL);
-	if (latch->event == NULL)
-		elog(ERROR, "CreateEvent failed: error code %lu", GetLastError());
-}
-
-void
-OwnLatch(volatile Latch *latch)
-{
-	/* Sanity checks */
-	Assert(latch->is_shared);
-	if (latch->owner_pid != 0)
-		elog(ERROR, "latch already owned");
-
-	latch->owner_pid = MyProcPid;
-}
-
-void
-DisownLatch(volatile Latch *latch)
-{
-	Assert(latch->is_shared);
-	Assert(latch->owner_pid == MyProcPid);
-
-	latch->owner_pid = 0;
-}
-
-int
-WaitLatch(volatile Latch *latch, int wakeEvents, long timeout)
-{
-	return WaitLatchOrSocket(latch, wakeEvents, PGINVALID_SOCKET, timeout);
-}
-
-int
-WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
-				  long timeout)
-{
-	DWORD		rc;
-	instr_time	start_time,
-				cur_time;
-	long		cur_timeout;
-	HANDLE		events[4];
-	HANDLE		latchevent;
-	HANDLE		sockevent = WSA_INVALID_EVENT;
-	int			numevents;
-	int			result = 0;
-	int			pmdeath_eventno = 0;
-
-	Assert(wakeEvents != 0);	/* must have at least one wake event */
-
-	/* waiting for socket readiness without a socket indicates a bug */
-	if (sock == PGINVALID_SOCKET &&
-		(wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != 0)
-		elog(ERROR, "cannot wait on socket event without a socket");
-
-	if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != MyProcPid)
-		elog(ERROR, "cannot wait on a latch owned by another process");
-
-	/*
-	 * Initialize timeout if requested.  We must record the current time so
-	 * that we can determine the remaining timeout if WaitForMultipleObjects
-	 * is interrupted.
-	 */
-	if (wakeEvents & WL_TIMEOUT)
-	{
-		INSTR_TIME_SET_CURRENT(start_time);
-		Assert(timeout >= 0 && timeout <= INT_MAX);
-		cur_timeout = timeout;
-	}
-	else
-		cur_timeout = INFINITE;
-
-	/*
-	 * Construct an array of event handles for WaitforMultipleObjects().
-	 *
-	 * Note: pgwin32_signal_event should be first to ensure that it will be
-	 * reported when multiple events are set.  We want to guarantee that
-	 * pending signals are serviced.
-	 */
-	latchevent = latch->event;
-
-	events[0] = pgwin32_signal_event;
-	events[1] = latchevent;
-	numevents = 2;
-	if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
-	{
-		/* Need an event object to represent events on the socket */
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
-
-		if (wakeEvents & WL_SOCKET_READABLE)
-			flags |= FD_READ;
-		if (wakeEvents & WL_SOCKET_WRITEABLE)
-			flags |= FD_WRITE;
-
-		sockevent = WSACreateEvent();
-		if (sockevent == WSA_INVALID_EVENT)
-			elog(ERROR, "failed to create event for socket: error code %u",
-				 WSAGetLastError());
-		if (WSAEventSelect(sock, sockevent, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
-
-		events[numevents++] = sockevent;
-	}
-	if (wakeEvents & WL_POSTMASTER_DEATH)
-	{
-		pmdeath_eventno = numevents;
-		events[numevents++] = PostmasterHandle;
-	}
-
-	/* Ensure that signals are serviced even if latch is already set */
-	pgwin32_dispatch_queued_signals();
-
-	do
-	{
-		/*
-		 * The comment in unix_latch.c's equivalent to this applies here as
-		 * well. At least after mentally replacing self-pipe with windows
-		 * event. There's no danger of overflowing, as "Setting an event that
-		 * is already set has no effect.".
-		 */
-		if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
-		{
-			result |= WL_LATCH_SET;
-
-			/*
-			 * Leave loop immediately, avoid blocking again. We don't attempt
-			 * to report any other events that might also be satisfied.
-			 */
-			break;
-		}
-
-		rc = WaitForMultipleObjects(numevents, events, FALSE, cur_timeout);
-
-		if (rc == WAIT_FAILED)
-			elog(ERROR, "WaitForMultipleObjects() failed: error code %lu",
-				 GetLastError());
-		else if (rc == WAIT_TIMEOUT)
-		{
-			result |= WL_TIMEOUT;
-		}
-		else if (rc == WAIT_OBJECT_0)
-		{
-			/* Service newly-arrived signals */
-			pgwin32_dispatch_queued_signals();
-		}
-		else if (rc == WAIT_OBJECT_0 + 1)
-		{
-			/*
-			 * Reset the event.  We'll re-check the, potentially, set latch on
-			 * next iteration of loop, but let's not waste the cycles to
-			 * update cur_timeout below.
-			 */
-			if (!ResetEvent(latchevent))
-				elog(ERROR, "ResetEvent failed: error code %lu", GetLastError());
-
-			continue;
-		}
-		else if ((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) &&
-				 rc == WAIT_OBJECT_0 + 2)		/* socket is at event slot 2 */
-		{
-			WSANETWORKEVENTS resEvents;
-
-			ZeroMemory(&resEvents, sizeof(resEvents));
-			if (WSAEnumNetworkEvents(sock, sockevent, &resEvents) != 0)
-				elog(ERROR, "failed to enumerate network events: error code %u",
-					 WSAGetLastError());
-			if ((wakeEvents & WL_SOCKET_READABLE) &&
-				(resEvents.lNetworkEvents & FD_READ))
-			{
-				result |= WL_SOCKET_READABLE;
-			}
-			if ((wakeEvents & WL_SOCKET_WRITEABLE) &&
-				(resEvents.lNetworkEvents & FD_WRITE))
-			{
-				result |= WL_SOCKET_WRITEABLE;
-			}
-			if (resEvents.lNetworkEvents & FD_CLOSE)
-			{
-				if (wakeEvents & WL_SOCKET_READABLE)
-					result |= WL_SOCKET_READABLE;
-				if (wakeEvents & WL_SOCKET_WRITEABLE)
-					result |= WL_SOCKET_WRITEABLE;
-			}
-		}
-		else if ((wakeEvents & WL_POSTMASTER_DEATH) &&
-				 rc == WAIT_OBJECT_0 + pmdeath_eventno)
-		{
-			/*
-			 * Postmaster apparently died.  Since the consequences of falsely
-			 * returning WL_POSTMASTER_DEATH could be pretty unpleasant, we
-			 * take the trouble to positively verify this with
-			 * PostmasterIsAlive(), even though there is no known reason to
-			 * think that the event could be falsely set on Windows.
-			 */
-			if (!PostmasterIsAlive())
-				result |= WL_POSTMASTER_DEATH;
-		}
-		else
-			elog(ERROR, "unexpected return code from WaitForMultipleObjects(): %lu", rc);
-
-		/* If we're not done, update cur_timeout for next iteration */
-		if (result == 0 && (wakeEvents & WL_TIMEOUT))
-		{
-			INSTR_TIME_SET_CURRENT(cur_time);
-			INSTR_TIME_SUBTRACT(cur_time, start_time);
-			cur_timeout = timeout - (long) INSTR_TIME_GET_MILLISEC(cur_time);
-			if (cur_timeout <= 0)
-			{
-				/* Timeout has expired, no need to continue looping */
-				result |= WL_TIMEOUT;
-			}
-		}
-	} while (result == 0);
-
-	/* Clean up the event object we created for the socket */
-	if (sockevent != WSA_INVALID_EVENT)
-	{
-		WSAEventSelect(sock, NULL, 0);
-		WSACloseEvent(sockevent);
-	}
-
-	return result;
-}
-
-/*
- * The comments above the unix implementation (unix_latch.c) of this function
- * apply here as well.
- */
-void
-SetLatch(volatile Latch *latch)
-{
-	HANDLE		handle;
-
-	/*
-	 * The memory barrier has be to be placed here to ensure that any flag
-	 * variables possibly changed by this process have been flushed to main
-	 * memory, before we check/set is_set.
-	 */
-	pg_memory_barrier();
-
-	/* Quick exit if already set */
-	if (latch->is_set)
-		return;
-
-	latch->is_set = true;
-
-	/*
-	 * See if anyone's waiting for the latch. It can be the current process if
-	 * we're in a signal handler.
-	 *
-	 * Use a local variable here just in case somebody changes the event field
-	 * concurrently (which really should not happen).
-	 */
-	handle = latch->event;
-	if (handle)
-	{
-		SetEvent(handle);
-
-		/*
-		 * Note that we silently ignore any errors. We might be in a signal
-		 * handler or other critical path where it's not safe to call elog().
-		 */
-	}
-}
-
-void
-ResetLatch(volatile Latch *latch)
-{
-	/* Only the owner should reset the latch */
-	Assert(latch->owner_pid == MyProcPid);
-
-	latch->is_set = false;
-
-	/*
-	 * Ensure that the write to is_set gets flushed to main memory before we
-	 * examine any flag variables.  Otherwise a concurrent SetLatch might
-	 * falsely conclude that it needn't signal us, even though we have missed
-	 * seeing some flag updates that SetLatch was supposed to inform us of.
-	 */
-	pg_memory_barrier();
-}
diff --git a/src/backend/storage/ipc/Makefile b/src/backend/storage/ipc/Makefile
index d8eb742..8a55392 100644
--- a/src/backend/storage/ipc/Makefile
+++ b/src/backend/storage/ipc/Makefile
@@ -8,7 +8,8 @@ subdir = src/backend/storage/ipc
 top_builddir = ../../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = dsm_impl.o dsm.o ipc.o ipci.o pmsignal.o procarray.o procsignal.o \
-	shmem.o shmqueue.o shm_mq.o shm_toc.o sinval.o sinvaladt.o standby.o
+OBJS = dsm_impl.o dsm.o ipc.o ipci.o latch.o pmsignal.o procarray.o \
+	procsignal.o  shmem.o shmqueue.o shm_mq.o shm_toc.o sinval.o \
+	sinvaladt.o standby.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/port/unix_latch.c b/src/backend/storage/ipc/latch.c
similarity index 74%
rename from src/backend/port/unix_latch.c
rename to src/backend/storage/ipc/latch.c
index 104401d..143d2a1 100644
--- a/src/backend/port/unix_latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * unix_latch.c
+ * latch.c
  *	  Routines for inter-process latches
  *
  * The Unix implementation uses the so-called self-pipe trick to overcome
@@ -22,11 +22,14 @@
  * process, SIGUSR1 is sent and the signal handler in the waiting process
  * writes the byte to the pipe on behalf of the signaling process.
  *
+ * The Windows implementation uses Windows events that are inherited by
+ * all postmaster child processes.
+ *
  * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/backend/port/unix_latch.c
+ *	  src/backend/storage/ipc/latch.c
  *
  *-------------------------------------------------------------------------
  */
@@ -62,16 +65,19 @@
  * useful to manually specify the used primitive.  If desired, just add a
  * define somewhere before this block.
  */
-#if defined(LATCH_USE_POLL) || defined(LATCH_USE_SELECT)
+#if defined(LATCH_USE_POLL) || defined(LATCH_USE_SELECT) || defined(LATCH_USE_WIN32)
 /* don't overwrite manual choice */
 #elif defined(HAVE_POLL)
 #define LATCH_USE_POLL
 #elif HAVE_SYS_SELECT_H
 #define LATCH_USE_SELECT
+#elif WIN32
+#define LATCH_USE_WIN32
 #else
 #error "no latch implementation available"
 #endif
 
+#ifndef WIN32
 /* Are we currently in WaitLatch? The signal handler would like to know. */
 static volatile sig_atomic_t waiting = false;
 
@@ -82,6 +88,7 @@ static int	selfpipe_writefd = -1;
 /* Private function prototypes */
 static void sendSelfPipeByte(void);
 static void drainSelfPipe(void);
+#endif   /* WIN32 */
 
 
 /*
@@ -93,6 +100,7 @@ static void drainSelfPipe(void);
 void
 InitializeLatchSupport(void)
 {
+#ifndef WIN32
 	int			pipefd[2];
 
 	Assert(selfpipe_readfd == -1);
@@ -113,6 +121,9 @@ InitializeLatchSupport(void)
 
 	selfpipe_readfd = pipefd[0];
 	selfpipe_writefd = pipefd[1];
+#else
+	/* currently, nothing to do here for Windows */
+#endif
 }
 
 /*
@@ -121,12 +132,18 @@ InitializeLatchSupport(void)
 void
 InitLatch(volatile Latch *latch)
 {
-	/* Assert InitializeLatchSupport has been called in this process */
-	Assert(selfpipe_readfd >= 0);
-
 	latch->is_set = false;
 	latch->owner_pid = MyProcPid;
 	latch->is_shared = false;
+
+#ifndef WIN32
+	/* Assert InitializeLatchSupport has been called in this process */
+	Assert(selfpipe_readfd >= 0);
+#else
+	latch->event = CreateEvent(NULL, TRUE, FALSE, NULL);
+	if (latch->event == NULL)
+		elog(ERROR, "CreateEvent failed: error code %lu", GetLastError());
+#endif   /* WIN32 */
 }
 
 /*
@@ -143,6 +160,21 @@ InitLatch(volatile Latch *latch)
 void
 InitSharedLatch(volatile Latch *latch)
 {
+#ifdef WIN32
+	SECURITY_ATTRIBUTES sa;
+
+	/*
+	 * Set up security attributes to specify that the events are inherited.
+	 */
+	ZeroMemory(&sa, sizeof(sa));
+	sa.nLength = sizeof(sa);
+	sa.bInheritHandle = TRUE;
+
+	latch->event = CreateEvent(&sa, TRUE, FALSE, NULL);
+	if (latch->event == NULL)
+		elog(ERROR, "CreateEvent failed: error code %lu", GetLastError());
+#endif
+
 	latch->is_set = false;
 	latch->owner_pid = 0;
 	latch->is_shared = true;
@@ -164,12 +196,14 @@ InitSharedLatch(volatile Latch *latch)
 void
 OwnLatch(volatile Latch *latch)
 {
-	/* Assert InitializeLatchSupport has been called in this process */
-	Assert(selfpipe_readfd >= 0);
-
+	/* Sanity checks */
 	Assert(latch->is_shared);
 
-	/* sanity check */
+#ifndef WIN32
+	/* Assert InitializeLatchSupport has been called in this process */
+	Assert(selfpipe_readfd >= 0);
+#endif
+
 	if (latch->owner_pid != 0)
 		elog(ERROR, "latch already owned");
 
@@ -221,6 +255,7 @@ WaitLatch(volatile Latch *latch, int wakeEvents, long timeout)
  * returning the socket as readable/writable or both, depending on
  * WL_SOCKET_READABLE/WL_SOCKET_WRITEABLE being specified.
  */
+#ifndef WIN32
 int
 WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 				  long timeout)
@@ -551,6 +586,198 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 
 	return result;
 }
+#else
+int
+WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
+				  long timeout)
+{
+	DWORD		rc;
+	instr_time	start_time,
+				cur_time;
+	long		cur_timeout;
+	HANDLE		events[4];
+	HANDLE		latchevent;
+	HANDLE		sockevent = WSA_INVALID_EVENT;
+	int			numevents;
+	int			result = 0;
+	int			pmdeath_eventno = 0;
+
+	Assert(wakeEvents != 0);	/* must have at least one wake event */
+
+	/* waiting for socket readiness without a socket indicates a bug */
+	if (sock == PGINVALID_SOCKET &&
+		(wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != 0)
+		elog(ERROR, "cannot wait on socket events without a socket");
+
+	if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != MyProcPid)
+		elog(ERROR, "cannot wait on a latch owned by another process");
+
+	/*
+	 * Initialize timeout if requested.  We must record the current time so
+	 * that we can determine the remaining timeout if WaitForMultipleObjects
+	 * is interrupted.
+	 */
+	if (wakeEvents & WL_TIMEOUT)
+	{
+		INSTR_TIME_SET_CURRENT(start_time);
+		Assert(timeout >= 0 && timeout <= INT_MAX);
+		cur_timeout = timeout;
+	}
+	else
+		cur_timeout = INFINITE;
+
+	/*
+	 * Construct an array of event handles for WaitforMultipleObjects().
+	 *
+	 * Note: pgwin32_signal_event should be first to ensure that it will be
+	 * reported when multiple events are set.  We want to guarantee that
+	 * pending signals are serviced.
+	 */
+	latchevent = latch->event;
+
+	events[0] = pgwin32_signal_event;
+	events[1] = latchevent;
+	numevents = 2;
+	if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
+	{
+		/* Need an event object to represent events on the socket */
+		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+
+		if (wakeEvents & WL_SOCKET_READABLE)
+			flags |= FD_READ;
+		if (wakeEvents & WL_SOCKET_WRITEABLE)
+			flags |= FD_WRITE;
+
+		sockevent = WSACreateEvent();
+		if (sockevent == WSA_INVALID_EVENT)
+			elog(ERROR, "failed to create event for socket: error code %u",
+				 WSAGetLastError());
+		if (WSAEventSelect(sock, sockevent, flags) != 0)
+			elog(ERROR, "failed to set up event for socket: error code %u",
+				 WSAGetLastError());
+
+		events[numevents++] = sockevent;
+	}
+	if (wakeEvents & WL_POSTMASTER_DEATH)
+	{
+		pmdeath_eventno = numevents;
+		events[numevents++] = PostmasterHandle;
+	}
+
+	/* Ensure that signals are serviced even if latch is already set */
+	pgwin32_dispatch_queued_signals();
+
+	do
+	{
+		/*
+		 * Reset the event, and check if the latch is set already. If someone
+		 * sets the latch between this and the WaitForMultipleObjects() call
+		 * below, the setter will set the event and WaitForMultipleObjects()
+		 * will return immediately.
+		 */
+		if (!ResetEvent(latchevent))
+			elog(ERROR, "ResetEvent failed: error code %lu", GetLastError());
+
+		if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
+		{
+			result |= WL_LATCH_SET;
+
+			/*
+			 * Leave loop immediately, avoid blocking again. We don't attempt
+			 * to report any other events that might also be satisfied.
+			 */
+			break;
+		}
+
+		rc = WaitForMultipleObjects(numevents, events, FALSE, cur_timeout);
+
+		if (rc == WAIT_FAILED)
+			elog(ERROR, "WaitForMultipleObjects() failed: error code %lu",
+				 GetLastError());
+		else if (rc == WAIT_TIMEOUT)
+		{
+			result |= WL_TIMEOUT;
+		}
+		else if (rc == WAIT_OBJECT_0)
+		{
+			/* Service newly-arrived signals */
+			pgwin32_dispatch_queued_signals();
+		}
+		else if (rc == WAIT_OBJECT_0 + 1)
+		{
+			/*
+			 * Latch is set.  We'll handle that on next iteration of loop, but
+			 * let's not waste the cycles to update cur_timeout below.
+			 */
+			continue;
+		}
+		else if ((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) &&
+				 rc == WAIT_OBJECT_0 + 2)		/* socket is at event slot 2 */
+		{
+			WSANETWORKEVENTS resEvents;
+
+			ZeroMemory(&resEvents, sizeof(resEvents));
+			if (WSAEnumNetworkEvents(sock, sockevent, &resEvents) != 0)
+				elog(ERROR, "failed to enumerate network events: error code %u",
+					 WSAGetLastError());
+			if ((wakeEvents & WL_SOCKET_READABLE) &&
+				(resEvents.lNetworkEvents & FD_READ))
+			{
+				result |= WL_SOCKET_READABLE;
+			}
+			if ((wakeEvents & WL_SOCKET_WRITEABLE) &&
+				(resEvents.lNetworkEvents & FD_WRITE))
+			{
+				result |= WL_SOCKET_WRITEABLE;
+			}
+			if (resEvents.lNetworkEvents & FD_CLOSE)
+			{
+				if (wakeEvents & WL_SOCKET_READABLE)
+					result |= WL_SOCKET_READABLE;
+				if (wakeEvents & WL_SOCKET_WRITEABLE)
+					result |= WL_SOCKET_WRITEABLE;
+			}
+		}
+		else if ((wakeEvents & WL_POSTMASTER_DEATH) &&
+				 rc == WAIT_OBJECT_0 + pmdeath_eventno)
+		{
+			/*
+			 * Postmaster apparently died.  Since the consequences of falsely
+			 * returning WL_POSTMASTER_DEATH could be pretty unpleasant, we
+			 * take the trouble to positively verify this with
+			 * PostmasterIsAlive(), even though there is no known reason to
+			 * think that the event could be falsely set on Windows.
+			 */
+			if (!PostmasterIsAlive())
+				result |= WL_POSTMASTER_DEATH;
+		}
+		else
+			elog(ERROR, "unexpected return code from WaitForMultipleObjects(): %lu", rc);
+
+		/* If we're not done, update cur_timeout for next iteration */
+		if (result == 0 && (wakeEvents & WL_TIMEOUT))
+		{
+			INSTR_TIME_SET_CURRENT(cur_time);
+			INSTR_TIME_SUBTRACT(cur_time, start_time);
+			cur_timeout = timeout - (long) INSTR_TIME_GET_MILLISEC(cur_time);
+			if (cur_timeout <= 0)
+			{
+				/* Timeout has expired, no need to continue looping */
+				result |= WL_TIMEOUT;
+			}
+		}
+	} while (result == 0);
+
+	/* Clean up the event object we created for the socket */
+	if (sockevent != WSA_INVALID_EVENT)
+	{
+		WSAEventSelect(sock, NULL, 0);
+		WSACloseEvent(sockevent);
+	}
+
+	return result;
+}
+#endif
 
 /*
  * Sets a latch and wakes up anyone waiting on it.
@@ -567,7 +794,11 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 void
 SetLatch(volatile Latch *latch)
 {
+#ifndef WIN32
 	pid_t		owner_pid;
+#else
+	HANDLE		handle;
+#endif
 
 	/*
 	 * The memory barrier has be to be placed here to ensure that any flag
@@ -582,6 +813,8 @@ SetLatch(volatile Latch *latch)
 
 	latch->is_set = true;
 
+#ifndef WIN32
+
 	/*
 	 * See if anyone's waiting for the latch. It can be the current process if
 	 * we're in a signal handler. We use the self-pipe to wake up the select()
@@ -613,6 +846,27 @@ SetLatch(volatile Latch *latch)
 	}
 	else
 		kill(owner_pid, SIGUSR1);
+#else
+
+	/*
+	 * See if anyone's waiting for the latch. It can be the current process if
+	 * we're in a signal handler.
+	 *
+	 * Use a local variable here just in case somebody changes the event field
+	 * concurrently (which really should not happen).
+	 */
+	handle = latch->event;
+	if (handle)
+	{
+		SetEvent(handle);
+
+		/*
+		 * Note that we silently ignore any errors. We might be in a signal
+		 * handler or other critical path where it's not safe to call elog().
+		 */
+	}
+#endif
+
 }
 
 /*
@@ -646,14 +900,17 @@ ResetLatch(volatile Latch *latch)
  * NB: when calling this in a signal handler, be sure to save and restore
  * errno around it.
  */
+#ifndef WIN32
 void
 latch_sigusr1_handler(void)
 {
 	if (waiting)
 		sendSelfPipeByte();
 }
+#endif   /* !WIN32 */
 
 /* Send one byte to the self-pipe, to wake up WaitLatch */
+#ifndef WIN32
 static void
 sendSelfPipeByte(void)
 {
@@ -683,6 +940,7 @@ retry:
 		return;
 	}
 }
+#endif   /* !WIN32 */
 
 /*
  * Read all available data from the self-pipe
@@ -691,6 +949,7 @@ retry:
  * return, it must reset that flag first (though ideally, this will never
  * happen).
  */
+#ifndef WIN32
 static void
 drainSelfPipe(void)
 {
@@ -729,3 +988,4 @@ drainSelfPipe(void)
 		/* else buffer wasn't big enough, so read again */
 	}
 }
+#endif   /* !WIN32 */
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index e77491e..2719498 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -36,7 +36,7 @@
  * WaitLatch includes a provision for timeouts (which should be avoided
  * when possible, as they incur extra overhead) and a provision for
  * postmaster child processes to wake up immediately on postmaster death.
- * See unix_latch.c for detailed specifications for the exported functions.
+ * See latch.c for detailed specifications for the exported functions.
  *
  * The correct pattern to wait for event(s) is:
  *
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 949077a..b6e4577 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -134,8 +134,6 @@ sub mkvcbuild
 		'src/backend/port/win32_sema.c');
 	$postgres->ReplaceFile('src/backend/port/pg_shmem.c',
 		'src/backend/port/win32_shmem.c');
-	$postgres->ReplaceFile('src/backend/port/pg_latch.c',
-		'src/backend/port/win32_latch.c');
 	$postgres->AddFiles('src/port',   @pgportfiles);
 	$postgres->AddFiles('src/common', @pgcommonbkndfiles);
 	$postgres->AddDir('src/timezone');
-- 
2.7.0.229.g701fa7f

>From 35d645265abd0ff6d03ef246bc30bf3edc268439 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Fri, 18 Mar 2016 00:54:41 -0700
Subject: [PATCH 5/5] WIP: Introduce new WaitEventSet API.

Commit ac1d794 ("Make idle backends exit if the postmaster dies.")
introduced a regression on, at least, large linux systems. Constantly
adding the same postmaster_alive_fds to the OS's internal data
structures for implementing poll/select can cause significant
contention, leading to a performance regression of nearly 3x in one
example.

This can be avoided by using e.g. linux' epoll, which avoids having to
add/remove file descriptors to the wait data structures at a high rate.
Unfortunately the current latch interface makes it hard to allocate any
persistent per-backend resources.

Replace, with a backward compatibility layer, WaitLatchOrSocket with a
new WaitEventSet API. Users can allocate such a set once and reuse it
across multiple calls, and can add more than one file descriptor to
wait on. The latter has been added because there are upcoming postgres
features where that will be helpful.

In addition to the previously existing poll(2), select(2), and
WaitForMultipleObjects() implementations, also provide an epoll_wait(2)
based implementation to address the aforementioned performance
problem. Epoll is only available on linux, but that is the most likely
OS for machines large enough (four sockets) to reproduce the problem.

Todo:
* Testing, especially windows
* Documentation

Reported-By: Dmitry Vasilyev
Discussion: CAB-SwXZh44_2ybvS5Z67p_CDz=XFn4hNAD=cnmef+qqkxwf...@mail.gmail.com
    20160114143931.gg10...@awork2.anarazel.de
---
 configure                         |    2 +-
 configure.in                      |    2 +-
 src/backend/libpq/be-secure.c     |   24 +-
 src/backend/libpq/pqcomm.c        |    4 +
 src/backend/storage/ipc/latch.c   | 1540 +++++++++++++++++++++++++------------
 src/backend/utils/init/miscinit.c |    8 +
 src/include/libpq/libpq.h         |    3 +
 src/include/pg_config.h.in        |    3 +
 src/include/storage/latch.h       |   14 +
 src/tools/pgindent/typedefs.list  |    2 +
 10 files changed, 1084 insertions(+), 518 deletions(-)

diff --git a/configure b/configure
index c10d954..24655dc 100755
--- a/configure
+++ b/configure
@@ -10193,7 +10193,7 @@ fi
 ## Header files
 ##
 
-for ac_header in atomic.h crypt.h dld.h fp_class.h getopt.h ieeefp.h ifaddrs.h langinfo.h mbarrier.h poll.h pwd.h sys/ioctl.h sys/ipc.h sys/poll.h sys/pstat.h sys/resource.h sys/select.h sys/sem.h sys/shm.h sys/socket.h sys/sockio.h sys/tas.h sys/time.h sys/un.h termios.h ucred.h utime.h wchar.h wctype.h
+for ac_header in atomic.h crypt.h dld.h fp_class.h getopt.h ieeefp.h ifaddrs.h langinfo.h mbarrier.h poll.h pwd.h sys/epoll.h sys/ioctl.h sys/ipc.h sys/poll.h sys/pstat.h sys/resource.h sys/select.h sys/sem.h sys/shm.h sys/socket.h sys/sockio.h sys/tas.h sys/time.h sys/un.h termios.h ucred.h utime.h wchar.h wctype.h
 do :
   as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh`
 ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default"
diff --git a/configure.in b/configure.in
index 47d0f58..c564a76 100644
--- a/configure.in
+++ b/configure.in
@@ -1183,7 +1183,7 @@ AC_SUBST(UUID_LIBS)
 ##
 
 dnl sys/socket.h is required by AC_FUNC_ACCEPT_ARGTYPES
-AC_CHECK_HEADERS([atomic.h crypt.h dld.h fp_class.h getopt.h ieeefp.h ifaddrs.h langinfo.h mbarrier.h poll.h pwd.h sys/ioctl.h sys/ipc.h sys/poll.h sys/pstat.h sys/resource.h sys/select.h sys/sem.h sys/shm.h sys/socket.h sys/sockio.h sys/tas.h sys/time.h sys/un.h termios.h ucred.h utime.h wchar.h wctype.h])
+AC_CHECK_HEADERS([atomic.h crypt.h dld.h fp_class.h getopt.h ieeefp.h ifaddrs.h langinfo.h mbarrier.h poll.h pwd.h sys/epoll.h sys/ioctl.h sys/ipc.h sys/poll.h sys/pstat.h sys/resource.h sys/select.h sys/sem.h sys/shm.h sys/socket.h sys/sockio.h sys/tas.h sys/time.h sys/un.h termios.h ucred.h utime.h wchar.h wctype.h])
 
 # On BSD, test for net/if.h will fail unless sys/socket.h
 # is included first.
diff --git a/src/backend/libpq/be-secure.c b/src/backend/libpq/be-secure.c
index ac709d1..c396811 100644
--- a/src/backend/libpq/be-secure.c
+++ b/src/backend/libpq/be-secure.c
@@ -140,13 +140,13 @@ retry:
 	/* In blocking mode, wait until the socket is ready */
 	if (n < 0 && !port->noblock && (errno == EWOULDBLOCK || errno == EAGAIN))
 	{
-		int			w;
+		WaitEvent   event;
 
 		Assert(waitfor);
 
-		w = WaitLatchOrSocket(MyLatch,
-							  WL_LATCH_SET | WL_POSTMASTER_DEATH | waitfor,
-							  port->sock, 0);
+		ModifyWaitEvent(FeBeWaitSet, 0, waitfor, NULL);
+
+		WaitEventSetWait(FeBeWaitSet, 0 /* no timeout */, &event, 1);
 
 		/*
 		 * If the postmaster has died, it's not safe to continue running,
@@ -165,13 +165,13 @@ retry:
 		 * cycles checking for this very rare condition, and this should cause
 		 * us to exit quickly in most cases.)
 		 */
-		if (w & WL_POSTMASTER_DEATH)
+		if (event.events & WL_POSTMASTER_DEATH)
 			ereport(FATAL,
 					(errcode(ERRCODE_ADMIN_SHUTDOWN),
 					errmsg("terminating connection due to unexpected postmaster exit")));
 
 		/* Handle interrupt. */
-		if (w & WL_LATCH_SET)
+		if (event.events & WL_LATCH_SET)
 		{
 			ResetLatch(MyLatch);
 			ProcessClientReadInterrupt(true);
@@ -241,22 +241,22 @@ retry:
 
 	if (n < 0 && !port->noblock && (errno == EWOULDBLOCK || errno == EAGAIN))
 	{
-		int			w;
+		WaitEvent   event;
 
 		Assert(waitfor);
 
-		w = WaitLatchOrSocket(MyLatch,
-							  WL_LATCH_SET | WL_POSTMASTER_DEATH | waitfor,
-							  port->sock, 0);
+		ModifyWaitEvent(FeBeWaitSet, 0, waitfor, NULL);
+
+		WaitEventSetWait(FeBeWaitSet, 0 /* no timeout */, &event, 1);
 
 		/* See comments in secure_read. */
-		if (w & WL_POSTMASTER_DEATH)
+		if (event.events & WL_POSTMASTER_DEATH)
 			ereport(FATAL,
 					(errcode(ERRCODE_ADMIN_SHUTDOWN),
 					errmsg("terminating connection due to unexpected postmaster exit")));
 
 		/* Handle interrupt. */
-		if (w & WL_LATCH_SET)
+		if (event.events & WL_LATCH_SET)
 		{
 			ResetLatch(MyLatch);
 			ProcessClientWriteInterrupt(true);
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 71473db..c81abaf 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -201,6 +201,10 @@ pq_init(void)
 				(errmsg("could not set socket to nonblocking mode: %m")));
 #endif
 
+	FeBeWaitSet = CreateWaitEventSet(TopMemoryContext, 3);
+	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock, NULL);
+	AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch);
+	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL);
 }
 
 /* --------------------------------
diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c
index 143d2a1..0759398 100644
--- a/src/backend/storage/ipc/latch.c
+++ b/src/backend/storage/ipc/latch.c
@@ -41,6 +41,9 @@
 #include <unistd.h>
 #include <sys/time.h>
 #include <sys/types.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#endif
 #ifdef HAVE_POLL_H
 #include <poll.h>
 #endif
@@ -65,18 +68,38 @@
  * useful to manually specify the used primitive.  If desired, just add a
  * define somewhere before this block.
  */
-#if defined(LATCH_USE_POLL) || defined(LATCH_USE_SELECT) || defined(LATCH_USE_WIN32)
+#if defined(WAIT_USE_EPOLL) || defined(WAIT_USE_POLL) || defined(WAIT_USE_SELECT) || defined(WAIT_USE_WIN32)
 /* don't overwrite manual choice */
+#elif defined(HAVE_SYS_EPOLL_H)
+#define WAIT_USE_EPOLL
 #elif defined(HAVE_POLL)
-#define LATCH_USE_POLL
+#define WAIT_USE_POLL
 #elif HAVE_SYS_SELECT_H
-#define LATCH_USE_SELECT
+#define WAIT_USE_SELECT
 #elif WIN32
-#define LATCH_USE_WIN32
+#define WAIT_USE_WIN32
 #else
-#error "no latch implementation available"
+#error "no wait set implementation available"
 #endif
 
+typedef struct WaitEventSet
+{
+	int			nevents;
+	int			nevents_space;
+	Latch	   *latch;
+	int			latch_pos;
+	WaitEvent  *events;
+#if defined(WAIT_USE_EPOLL)
+	struct epoll_event *epoll_ret_events;
+	int			epoll_fd;
+#elif defined(WAIT_USE_POLL)
+	struct pollfd *pollfds;
+#endif
+#if defined(WAIT_USE_WIN32)
+	HANDLE	   *handles;
+#endif
+} WaitEventSet;
+
 #ifndef WIN32
 /* Are we currently in WaitLatch? The signal handler would like to know. */
 static volatile sig_atomic_t waiting = false;
@@ -90,6 +113,16 @@ static void sendSelfPipeByte(void);
 static void drainSelfPipe(void);
 #endif   /* WIN32 */
 
+#if defined(WAIT_USE_EPOLL)
+static void WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action);
+#elif defined(WAIT_USE_POLL)
+static void WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event);
+#elif defined(WAIT_USE_WIN32)
+static void WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event);
+#endif
+
+static int WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
+								 WaitEvent *occurred_events, int nevents);
 
 /*
  * Initialize the process-local latch infrastructure.
@@ -255,529 +288,56 @@ WaitLatch(volatile Latch *latch, int wakeEvents, long timeout)
  * returning the socket as readable/writable or both, depending on
  * WL_SOCKET_READABLE/WL_SOCKET_WRITEABLE being specified.
  */
-#ifndef WIN32
 int
 WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
 				  long timeout)
 {
-	int			result = 0;
+	int			ret = 0;
 	int			rc;
-	instr_time	start_time,
-				cur_time;
-	long		cur_timeout;
+	WaitEvent	event;
+	WaitEventSet *set = CreateWaitEventSet(CurrentMemoryContext, 3);
 
-#if defined(LATCH_USE_POLL)
-	struct pollfd pfds[3];
-	int			nfds;
-#elif defined(LATCH_USE_SELECT)
-	struct timeval tv,
-			   *tvp;
-	fd_set		input_mask;
-	fd_set		output_mask;
-	int			hifd;
-#endif
-
-	Assert(wakeEvents != 0);	/* must have at least one wake event */
+	if (wakeEvents & WL_TIMEOUT)
+		Assert(timeout >= 0);
+	else
+		timeout = -1;
 
 	/* waiting for socket readiness without a socket indicates a bug */
 	if (sock == PGINVALID_SOCKET &&
 		(wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != 0)
 		elog(ERROR, "cannot wait on socket event without a socket");
 
-	if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != MyProcPid)
-		elog(ERROR, "cannot wait on a latch owned by another process");
+	if (wakeEvents & WL_LATCH_SET)
+		AddWaitEventToSet(set, WL_LATCH_SET, PGINVALID_SOCKET,
+						  (Latch *) latch);
 
-	/*
-	 * Initialize timeout if requested.  We must record the current time so
-	 * that we can determine the remaining timeout if the poll() or select()
-	 * is interrupted.  (On some platforms, select() will update the contents
-	 * of "tv" for us, but unfortunately we can't rely on that.)
-	 */
-	if (wakeEvents & WL_TIMEOUT)
-	{
-		INSTR_TIME_SET_CURRENT(start_time);
-		Assert(timeout >= 0 && timeout <= INT_MAX);
-		cur_timeout = timeout;
+	if (wakeEvents & WL_POSTMASTER_DEATH)
+		AddWaitEventToSet(set, WL_POSTMASTER_DEATH, PGINVALID_SOCKET, NULL);
 
-#ifdef LATCH_USE_SELECT
-		tv.tv_sec = cur_timeout / 1000L;
-		tv.tv_usec = (cur_timeout % 1000L) * 1000L;
-		tvp = &tv;
-#endif
-	}
-	else
-	{
-		cur_timeout = -1;
-
-#ifdef LATCH_USE_SELECT
-		tvp = NULL;
-#endif
-	}
-
-	waiting = true;
-	do
-	{
-		/*
-		 * Check if the latch is set already. If so, leave loop immediately,
-		 * avoid blocking again. We don't attempt to report any other events
-		 * that might also be satisfied.
-		 *
-		 * If someone sets the latch between this and the poll()/select()
-		 * below, the setter will write a byte to the pipe (or signal us and
-		 * the signal handler will do that), and the poll()/select() will
-		 * return immediately.
-		 *
-		 * If there's a pending byte in the self pipe, we'll notice whenever
-		 * blocking. Only clearing the pipe in that case avoids having to
-		 * drain it every time WaitLatchOrSocket() is used. Should the
-		 * pipe-buffer fill up we're still ok, because the pipe is in
-		 * nonblocking mode. It's unlikely for that to happen, because the
-		 * self pipe isn't filled unless we're blocking (waiting = true), or
-		 * from inside a signal handler in latch_sigusr1_handler().
-		 *
-		 * Note: we assume that the kernel calls involved in drainSelfPipe()
-		 * and SetLatch() will provide adequate synchronization on machines
-		 * with weak memory ordering, so that we cannot miss seeing is_set if
-		 * the signal byte is already in the pipe when we drain it.
-		 */
-		if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
-		{
-			result |= WL_LATCH_SET;
-			break;
-		}
-
-		/*
-		 * Must wait ... we use the polling interface determined at the top of
-		 * this file to do so.
-		 */
-#if defined(LATCH_USE_POLL)
-		nfds = 0;
-
-		/* selfpipe is always in pfds[0] */
-		pfds[0].fd = selfpipe_readfd;
-		pfds[0].events = POLLIN;
-		pfds[0].revents = 0;
-		nfds++;
-
-		if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
-		{
-			/* socket, if used, is always in pfds[1] */
-			pfds[1].fd = sock;
-			pfds[1].events = 0;
-			if (wakeEvents & WL_SOCKET_READABLE)
-				pfds[1].events |= POLLIN;
-			if (wakeEvents & WL_SOCKET_WRITEABLE)
-				pfds[1].events |= POLLOUT;
-			pfds[1].revents = 0;
-			nfds++;
-		}
-
-		if (wakeEvents & WL_POSTMASTER_DEATH)
-		{
-			/* postmaster fd, if used, is always in pfds[nfds - 1] */
-			pfds[nfds].fd = postmaster_alive_fds[POSTMASTER_FD_WATCH];
-			pfds[nfds].events = POLLIN;
-			pfds[nfds].revents = 0;
-			nfds++;
-		}
-
-		/* Sleep */
-		rc = poll(pfds, nfds, (int) cur_timeout);
-
-		/* Check return code */
-		if (rc < 0)
-		{
-			/* EINTR is okay, otherwise complain */
-			if (errno != EINTR)
-			{
-				waiting = false;
-				ereport(ERROR,
-						(errcode_for_socket_access(),
-						 errmsg("poll() failed: %m")));
-			}
-		}
-		else if (rc == 0)
-		{
-			/* timeout exceeded */
-			if (wakeEvents & WL_TIMEOUT)
-				result |= WL_TIMEOUT;
-		}
-		else
-		{
-			/* at least one event occurred, so check revents values */
-
-			if (pfds[0].revents & POLLIN)
-			{
-				/* There's data in the self-pipe, clear it. */
-				drainSelfPipe();
-			}
-
-			if ((wakeEvents & WL_SOCKET_READABLE) &&
-				(pfds[1].revents & POLLIN))
-			{
-				/* data available in socket, or EOF/error condition */
-				result |= WL_SOCKET_READABLE;
-			}
-			if ((wakeEvents & WL_SOCKET_WRITEABLE) &&
-				(pfds[1].revents & POLLOUT))
-			{
-				/* socket is writable */
-				result |= WL_SOCKET_WRITEABLE;
-			}
-			if ((wakeEvents & WL_SOCKET_WRITEABLE) &&
-				(pfds[1].revents & (POLLHUP | POLLERR | POLLNVAL)))
-			{
-				/* EOF/error condition */
-				if (wakeEvents & WL_SOCKET_READABLE)
-					result |= WL_SOCKET_READABLE;
-				if (wakeEvents & WL_SOCKET_WRITEABLE)
-					result |= WL_SOCKET_WRITEABLE;
-			}
-
-			/*
-			 * We expect a POLLHUP when the remote end is closed, but because
-			 * we don't expect the pipe to become readable or to have any
-			 * errors either, treat those cases as postmaster death, too.
-			 */
-			if ((wakeEvents & WL_POSTMASTER_DEATH) &&
-				(pfds[nfds - 1].revents & (POLLHUP | POLLIN | POLLERR | POLLNVAL)))
-			{
-				/*
-				 * According to the select(2) man page on Linux, select(2) may
-				 * spuriously return and report a file descriptor as readable,
-				 * when it's not; and presumably so can poll(2).  It's not
-				 * clear that the relevant cases would ever apply to the
-				 * postmaster pipe, but since the consequences of falsely
-				 * returning WL_POSTMASTER_DEATH could be pretty unpleasant,
-				 * we take the trouble to positively verify EOF with
-				 * PostmasterIsAlive().
-				 */
-				if (!PostmasterIsAlive())
-					result |= WL_POSTMASTER_DEATH;
-			}
-		}
-#elif defined(LATCH_USE_SELECT)
-
-		/*
-		 * On at least older linux kernels select(), in violation of POSIX,
-		 * doesn't reliably return a socket as writable if closed - but we
-		 * rely on that. So far all the known cases of this problem are on
-		 * platforms that also provide a poll() implementation without that
-		 * bug.  If we find one where that's not the case, we'll need to add a
-		 * workaround.
-		 */
-		FD_ZERO(&input_mask);
-		FD_ZERO(&output_mask);
-
-		FD_SET(selfpipe_readfd, &input_mask);
-		hifd = selfpipe_readfd;
-
-		if (wakeEvents & WL_POSTMASTER_DEATH)
-		{
-			FD_SET(postmaster_alive_fds[POSTMASTER_FD_WATCH], &input_mask);
-			if (postmaster_alive_fds[POSTMASTER_FD_WATCH] > hifd)
-				hifd = postmaster_alive_fds[POSTMASTER_FD_WATCH];
-		}
-
-		if (wakeEvents & WL_SOCKET_READABLE)
-		{
-			FD_SET(sock, &input_mask);
-			if (sock > hifd)
-				hifd = sock;
-		}
-
-		if (wakeEvents & WL_SOCKET_WRITEABLE)
-		{
-			FD_SET(sock, &output_mask);
-			if (sock > hifd)
-				hifd = sock;
-		}
-
-		/* Sleep */
-		rc = select(hifd + 1, &input_mask, &output_mask, NULL, tvp);
-
-		/* Check return code */
-		if (rc < 0)
-		{
-			/* EINTR is okay, otherwise complain */
-			if (errno != EINTR)
-			{
-				waiting = false;
-				ereport(ERROR,
-						(errcode_for_socket_access(),
-						 errmsg("select() failed: %m")));
-			}
-		}
-		else if (rc == 0)
-		{
-			/* timeout exceeded */
-			if (wakeEvents & WL_TIMEOUT)
-				result |= WL_TIMEOUT;
-		}
-		else
-		{
-			/* at least one event occurred, so check masks */
-			if (FD_ISSET(selfpipe_readfd, &input_mask))
-			{
-				/* There's data in the self-pipe, clear it. */
-				drainSelfPipe();
-			}
-			if ((wakeEvents & WL_SOCKET_READABLE) && FD_ISSET(sock, &input_mask))
-			{
-				/* data available in socket, or EOF */
-				result |= WL_SOCKET_READABLE;
-			}
-			if ((wakeEvents & WL_SOCKET_WRITEABLE) && FD_ISSET(sock, &output_mask))
-			{
-				/* socket is writable, or EOF */
-				result |= WL_SOCKET_WRITEABLE;
-			}
-			if ((wakeEvents & WL_POSTMASTER_DEATH) &&
-				FD_ISSET(postmaster_alive_fds[POSTMASTER_FD_WATCH],
-						 &input_mask))
-			{
-				/*
-				 * According to the select(2) man page on Linux, select(2) may
-				 * spuriously return and report a file descriptor as readable,
-				 * when it's not; and presumably so can poll(2).  It's not
-				 * clear that the relevant cases would ever apply to the
-				 * postmaster pipe, but since the consequences of falsely
-				 * returning WL_POSTMASTER_DEATH could be pretty unpleasant,
-				 * we take the trouble to positively verify EOF with
-				 * PostmasterIsAlive().
-				 */
-				if (!PostmasterIsAlive())
-					result |= WL_POSTMASTER_DEATH;
-			}
-		}
-#endif   /* LATCH_USE_SELECT */
-
-		/*
-		 * Check again whether latch is set, the arrival of a signal/self-byte
-		 * might be what stopped our sleep. It's not required for correctness
-		 * to signal the latch as being set (we'd just loop if there's no
-		 * other event), but it seems good to report an arrived latch asap.
-		 * This way we also don't have to compute the current timestamp again.
-		 */
-		if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
-			result |= WL_LATCH_SET;
-
-		/* If we're not done, update cur_timeout for next iteration */
-		if (result == 0 && (wakeEvents & WL_TIMEOUT))
-		{
-			INSTR_TIME_SET_CURRENT(cur_time);
-			INSTR_TIME_SUBTRACT(cur_time, start_time);
-			cur_timeout = timeout - (long) INSTR_TIME_GET_MILLISEC(cur_time);
-			if (cur_timeout <= 0)
-			{
-				/* Timeout has expired, no need to continue looping */
-				result |= WL_TIMEOUT;
-			}
-#ifdef LATCH_USE_SELECT
-			else
-			{
-				tv.tv_sec = cur_timeout / 1000L;
-				tv.tv_usec = (cur_timeout % 1000L) * 1000L;
-			}
-#endif
-		}
-	} while (result == 0);
-	waiting = false;
-
-	return result;
-}
-#else
-int
-WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
-				  long timeout)
-{
-	DWORD		rc;
-	instr_time	start_time,
-				cur_time;
-	long		cur_timeout;
-	HANDLE		events[4];
-	HANDLE		latchevent;
-	HANDLE		sockevent = WSA_INVALID_EVENT;
-	int			numevents;
-	int			result = 0;
-	int			pmdeath_eventno = 0;
-
-	Assert(wakeEvents != 0);	/* must have at least one wake event */
-
-	/* waiting for socket readiness without a socket indicates a bug */
-	if (sock == PGINVALID_SOCKET &&
-		(wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) != 0)
-		elog(ERROR, "cannot wait on socket events without a socket");
-
-	if ((wakeEvents & WL_LATCH_SET) && latch->owner_pid != MyProcPid)
-		elog(ERROR, "cannot wait on a latch owned by another process");
-
-	/*
-	 * Initialize timeout if requested.  We must record the current time so
-	 * that we can determine the remaining timeout if WaitForMultipleObjects
-	 * is interrupted.
-	 */
-	if (wakeEvents & WL_TIMEOUT)
-	{
-		INSTR_TIME_SET_CURRENT(start_time);
-		Assert(timeout >= 0 && timeout <= INT_MAX);
-		cur_timeout = timeout;
-	}
-	else
-		cur_timeout = INFINITE;
-
-	/*
-	 * Construct an array of event handles for WaitforMultipleObjects().
-	 *
-	 * Note: pgwin32_signal_event should be first to ensure that it will be
-	 * reported when multiple events are set.  We want to guarantee that
-	 * pending signals are serviced.
-	 */
-	latchevent = latch->event;
-
-	events[0] = pgwin32_signal_event;
-	events[1] = latchevent;
-	numevents = 2;
 	if (wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
 	{
-		/* Need an event object to represent events on the socket */
-		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+		int			ev;
 
-		if (wakeEvents & WL_SOCKET_READABLE)
-			flags |= FD_READ;
-		if (wakeEvents & WL_SOCKET_WRITEABLE)
-			flags |= FD_WRITE;
-
-		sockevent = WSACreateEvent();
-		if (sockevent == WSA_INVALID_EVENT)
-			elog(ERROR, "failed to create event for socket: error code %u",
-				 WSAGetLastError());
-		if (WSAEventSelect(sock, sockevent, flags) != 0)
-			elog(ERROR, "failed to set up event for socket: error code %u",
-				 WSAGetLastError());
-
-		events[numevents++] = sockevent;
-	}
-	if (wakeEvents & WL_POSTMASTER_DEATH)
-	{
-		pmdeath_eventno = numevents;
-		events[numevents++] = PostmasterHandle;
+		ev = wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE);
+		AddWaitEventToSet(set, ev, sock, NULL);
 	}
 
-	/* Ensure that signals are serviced even if latch is already set */
-	pgwin32_dispatch_queued_signals();
+	rc = WaitEventSetWait(set, timeout, &event, 1);
 
-	do
+	if (rc == 0)
+		ret |= WL_TIMEOUT;
+	else
 	{
-		/*
-		 * Reset the event, and check if the latch is set already. If someone
-		 * sets the latch between this and the WaitForMultipleObjects() call
-		 * below, the setter will set the event and WaitForMultipleObjects()
-		 * will return immediately.
-		 */
-		if (!ResetEvent(latchevent))
-			elog(ERROR, "ResetEvent failed: error code %lu", GetLastError());
-
-		if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
-		{
-			result |= WL_LATCH_SET;
-
-			/*
-			 * Leave loop immediately, avoid blocking again. We don't attempt
-			 * to report any other events that might also be satisfied.
-			 */
-			break;
-		}
-
-		rc = WaitForMultipleObjects(numevents, events, FALSE, cur_timeout);
-
-		if (rc == WAIT_FAILED)
-			elog(ERROR, "WaitForMultipleObjects() failed: error code %lu",
-				 GetLastError());
-		else if (rc == WAIT_TIMEOUT)
-		{
-			result |= WL_TIMEOUT;
-		}
-		else if (rc == WAIT_OBJECT_0)
-		{
-			/* Service newly-arrived signals */
-			pgwin32_dispatch_queued_signals();
-		}
-		else if (rc == WAIT_OBJECT_0 + 1)
-		{
-			/*
-			 * Latch is set.  We'll handle that on next iteration of loop, but
-			 * let's not waste the cycles to update cur_timeout below.
-			 */
-			continue;
-		}
-		else if ((wakeEvents & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)) &&
-				 rc == WAIT_OBJECT_0 + 2)		/* socket is at event slot 2 */
-		{
-			WSANETWORKEVENTS resEvents;
-
-			ZeroMemory(&resEvents, sizeof(resEvents));
-			if (WSAEnumNetworkEvents(sock, sockevent, &resEvents) != 0)
-				elog(ERROR, "failed to enumerate network events: error code %u",
-					 WSAGetLastError());
-			if ((wakeEvents & WL_SOCKET_READABLE) &&
-				(resEvents.lNetworkEvents & FD_READ))
-			{
-				result |= WL_SOCKET_READABLE;
-			}
-			if ((wakeEvents & WL_SOCKET_WRITEABLE) &&
-				(resEvents.lNetworkEvents & FD_WRITE))
-			{
-				result |= WL_SOCKET_WRITEABLE;
-			}
-			if (resEvents.lNetworkEvents & FD_CLOSE)
-			{
-				if (wakeEvents & WL_SOCKET_READABLE)
-					result |= WL_SOCKET_READABLE;
-				if (wakeEvents & WL_SOCKET_WRITEABLE)
-					result |= WL_SOCKET_WRITEABLE;
-			}
-		}
-		else if ((wakeEvents & WL_POSTMASTER_DEATH) &&
-				 rc == WAIT_OBJECT_0 + pmdeath_eventno)
-		{
-			/*
-			 * Postmaster apparently died.  Since the consequences of falsely
-			 * returning WL_POSTMASTER_DEATH could be pretty unpleasant, we
-			 * take the trouble to positively verify this with
-			 * PostmasterIsAlive(), even though there is no known reason to
-			 * think that the event could be falsely set on Windows.
-			 */
-			if (!PostmasterIsAlive())
-				result |= WL_POSTMASTER_DEATH;
-		}
-		else
-			elog(ERROR, "unexpected return code from WaitForMultipleObjects(): %lu", rc);
-
-		/* If we're not done, update cur_timeout for next iteration */
-		if (result == 0 && (wakeEvents & WL_TIMEOUT))
-		{
-			INSTR_TIME_SET_CURRENT(cur_time);
-			INSTR_TIME_SUBTRACT(cur_time, start_time);
-			cur_timeout = timeout - (long) INSTR_TIME_GET_MILLISEC(cur_time);
-			if (cur_timeout <= 0)
-			{
-				/* Timeout has expired, no need to continue looping */
-				result |= WL_TIMEOUT;
-			}
-		}
-	} while (result == 0);
-
-	/* Clean up the event object we created for the socket */
-	if (sockevent != WSA_INVALID_EVENT)
-	{
-		WSAEventSelect(sock, NULL, 0);
-		WSACloseEvent(sockevent);
+		ret |= event.events & (WL_LATCH_SET |
+							   WL_POSTMASTER_DEATH |
+							   WL_SOCKET_READABLE |
+							   WL_SOCKET_WRITEABLE);
 	}
 
-	return result;
+	FreeWaitEventSet(set);
+
+	return ret;
 }
-#endif
 
 /*
  * Sets a latch and wakes up anyone waiting on it.
@@ -891,6 +451,978 @@ ResetLatch(volatile Latch *latch)
 }
 
 /*
+ * Create a WaitEventSet with space for nevents different events to wait for.
+ *
+ * Events are subsequently added with AddWaitEventToSet(); including a
+ * latch event is optional.
+ */
+WaitEventSet *
+CreateWaitEventSet(MemoryContext context, int nevents)
+{
+	WaitEventSet *set;
+	char	   *data;
+	Size		sz = 0;
+
+	sz += sizeof(WaitEventSet);
+	sz += sizeof(WaitEvent) * nevents;
+
+#if defined(WAIT_USE_EPOLL)
+	sz += sizeof(struct epoll_event) * nevents;
+#elif defined(WAIT_USE_POLL)
+	sz += sizeof(struct pollfd) * nevents;
+#elif defined(WAIT_USE_WIN32)
+	/* need space for the pgwin32_signal_event */
+	sz += sizeof(HANDLE) * (nevents + 1);
+#endif
+
+	data = (char *) MemoryContextAllocZero(context, sz);
+
+	set = (WaitEventSet *) data;
+	data += sizeof(WaitEventSet);
+
+	set->events = (WaitEvent *) data;
+	data += sizeof(WaitEvent) * nevents;
+
+#if defined(WAIT_USE_EPOLL)
+	set->epoll_ret_events = (struct epoll_event *) data;
+	data += sizeof(struct epoll_event) * nevents;
+#elif defined(WAIT_USE_POLL)
+	set->pollfds = (struct pollfd *) data;
+	data += sizeof(struct pollfd) * nevents;
+#elif defined(WAIT_USE_WIN32)
+	set->handles = (HANDLE *) data;
+	data += sizeof(HANDLE) * nevents;
+#endif
+
+	set->latch = NULL;
+	set->nevents_space = nevents;
+
+#if defined(WAIT_USE_EPOLL)
+	set->epoll_fd = epoll_create(nevents);
+	if (set->epoll_fd < 0)
+		elog(ERROR, "epoll_create failed: %m");
+#elif defined(WAIT_USE_WIN32)
+
+	/*
+	 * To handle signals while waiting, we need to add a win32 specific event.
+	 * We accounted for the additional event at the top of this routine. See
+	 * port/win32/signal.c for more details.
+	 *
+	 * Note: pgwin32_signal_event should be first to ensure that it will be
+	 * reported when multiple events are set.  We want to guarantee that
+	 * pending signals are serviced.
+	 */
+	set->handles[0] = pgwin32_signal_event;
+#endif
+
+	return set;
+}
+
+/*
+ * Free a previously created WaitEventSet.
+ */
+void
+FreeWaitEventSet(WaitEventSet *set)
+{
+#if defined(WAIT_USE_EPOLL)
+	close(set->epoll_fd);
+#elif defined(WAIT_USE_WIN32)
+	WaitEvent  *cur_event;
+
+	for (cur_event = set->events;
+		 cur_event < (set->events + set->nevents);
+		 cur_event++)
+	{
+		if (cur_event->events & WL_LATCH_SET)
+		{
+			/* uses the latch's HANDLE */
+		}
+		else if (cur_event->events & WL_POSTMASTER_DEATH)
+		{
+			/* uses PostmasterHandle */
+		}
+		else
+		{
+			/* Clean up the event object we created for the socket */
+			WSAEventSelect(cur_event->fd, NULL, 0);
+			WSACloseEvent(set->handles[cur_event->pos + 1]);
+		}
+	}
+#endif
+
+	pfree(set);
+}
+
+/* ---
+ * Add an event to the set. Possible events are:
+ * - WL_LATCH_SET: Wait for the latch to be set
+ * - WL_POSTMASTER_DEATH: Wait for postmaster to die
+ * - WL_SOCKET_READABLE: Wait for socket to become readable
+ *	 can be combined in one event with WL_SOCKET_WRITEABLE
+ * - WL_SOCKET_WRITEABLE: Wait for socket to become writeable
+ *	 can be combined with WL_SOCKET_READABLE
+ *
+ * Returns the offset in WaitEventSet->events (starting from 0), which can be
+ * used to modify previously added wait events.
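+ *
+ * A usage sketch (variables declared by the caller, error handling elided):
+ *
+ *	set = CreateWaitEventSet(CurrentMemoryContext, 2);
+ *	AddWaitEventToSet(set, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch);
+ *	sock_pos = AddWaitEventToSet(set, WL_SOCKET_READABLE, sock, NULL);
+ *
+ *	ModifyWaitEvent(set, sock_pos, WL_SOCKET_WRITEABLE, NULL);
+ *	if (WaitEventSetWait(set, -1, &event, 1) == 1 &&
+ *		(event.events & WL_LATCH_SET))
+ *		ResetLatch(MyLatch);
+ *	FreeWaitEventSet(set);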
+ */
+int
+AddWaitEventToSet(WaitEventSet *set, uint32 events, int fd, Latch *latch)
+{
+	WaitEvent  *event;
+
+	if (set->nevents_space <= set->nevents)
+		elog(ERROR, "no space for yet another event");
+
+	if (set->latch && latch)
+		elog(ERROR, "cannot wait on more than one latch");
+
+	if (latch == NULL && (events & WL_LATCH_SET))
+		elog(ERROR, "cannot wait on latch without a specified latch");
+
+	/* waiting for socket readiness without a socket indicates a bug */
+	if (fd == PGINVALID_SOCKET &&
+		(events & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE)))
+		elog(ERROR, "cannot wait on socket events without a socket");
+
+	/* FIXME: further event mask validation */
+
+	event = &set->events[set->nevents];
+	event->pos = set->nevents++;
+	event->fd = fd;
+	event->events = events;
+
+	if (events == WL_LATCH_SET)
+	{
+		set->latch = latch;
+		set->latch_pos = event->pos;
+#ifndef WIN32
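+		/* on Unix, latch wakeups arrive as bytes on the self-pipe */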
+		event->fd = selfpipe_readfd;
+#endif
+	}
+	else if (events == WL_POSTMASTER_DEATH)
+	{
+#ifndef WIN32
+		event->fd = postmaster_alive_fds[POSTMASTER_FD_WATCH];
+#endif
+	}
+
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_ADD);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event);
+#elif defined(WAIT_USE_SELECT)
+	/* nothing to do */
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event);
+#endif
+
+	return event->pos;
+}
+
+/*
+ * Change the event mask and, if applicable, the associated latch of a
+ * WaitEvent.
+ */
+void
+ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch)
+{
+	WaitEvent  *event;
+
+	Assert(pos < set->nevents);
+
+	event = &set->events[pos];
+
+	/* no need to perform any checks/modifications */
+	if (events == event->events && !(event->events & WL_LATCH_SET))
+		return;
+
+	if (event->events & WL_LATCH_SET &&
+		events != event->events)
+	{
+		/* we could allow disabling latch events for a while */
+		elog(ERROR, "cannot modify latch event");
+	}
+	if (event->events & WL_POSTMASTER_DEATH)
+	{
+		elog(ERROR, "cannot modify postmaster death event");
+	}
+
+	/* FIXME: validate event mask */
+	event->events = events;
+
+	if (events == WL_LATCH_SET)
+	{
+		set->latch = latch;
+	}
+
+#if defined(WAIT_USE_EPOLL)
+	WaitEventAdjustEpoll(set, event, EPOLL_CTL_MOD);
+#elif defined(WAIT_USE_POLL)
+	WaitEventAdjustPoll(set, event);
+#elif defined(WAIT_USE_SELECT)
+	/* nothing to do */
+#elif defined(WAIT_USE_WIN32)
+	WaitEventAdjustWin32(set, event);
+#endif
+}
+
+/*
+ * Wait for events added to the set to happen, or until the timeout is
+ * reached.  The timeout is given in milliseconds; a negative value means
+ * wait indefinitely.  At most nevents occurred events are returned.
+ *
+ * Returns the number of events that occurred, or 0 if the timeout was
+ * reached.
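+ *
+ * A sketch of retrieving several events per call; handle_input() stands in
+ * for a hypothetical caller-provided routine:
+ *
+ *	WaitEvent	occurred[4];
+ *	int			i,
+ *				n;
+ *
+ *	n = WaitEventSetWait(set, 1000, occurred, 4);
+ *	for (i = 0; i < n; i++)
+ *	{
+ *		if (occurred[i].events & WL_LATCH_SET)
+ *			ResetLatch(MyLatch);
+ *		if (occurred[i].events & WL_SOCKET_READABLE)
+ *			handle_input(occurred[i].fd);
+ *	}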
+ */
+int
+WaitEventSetWait(WaitEventSet *set, long timeout,
+				 WaitEvent *occurred_events, int nevents)
+{
+	int			returned_events = 0;
+	instr_time	start_time;
+	instr_time	cur_time;
+	long		cur_timeout = -1;
+
+	Assert(nevents > 0);
+
+	/*
+	 * Initialize timeout if requested.  We must record the current time so
+	 * that we can determine the remaining timeout if interrupted.
+	 */
+	if (timeout >= 0)
+	{
+		INSTR_TIME_SET_CURRENT(start_time);
+		Assert(timeout >= 0 && timeout <= INT_MAX);
+		cur_timeout = timeout;
+	}
+
+#ifndef WIN32
+	waiting = true;
+#else
+	/* Ensure that signals are serviced even if latch is already set */
+	pgwin32_dispatch_queued_signals();
+#endif
+	while (returned_events == 0)
+	{
+		int			rc;
+
+		/*
+		 * Check if the latch is set already. If so, leave the loop
+		 * immediately, avoid blocking again. We don't attempt to report any
+		 * other events that might also be satisfied.
+		 *
+		 * If someone sets the latch between this and the
+		 * WaitEventSetWaitBlock() below, the setter will write a byte to the
+		 * pipe (or signal us and the signal handler will do that), and the
+		 * readiness routine will return immediately.
+		 *
+		 * On Unix, if there's a pending byte in the self pipe, we'll notice
+		 * whenever blocking. Only clearing the pipe in that case avoids
+		 * having to drain it every time WaitLatchOrSocket() is used. Should
+		 * the pipe-buffer fill up we're still ok, because the pipe is in
+		 * nonblocking mode. It's unlikely for that to happen, because the
+		 * self pipe isn't filled unless we're blocking (waiting = true), or
+		 * from inside a signal handler in latch_sigusr1_handler().
+		 *
+		 * On Windows, we'll also notice if there's a pending event for the
+		 * latch when blocking, but there's no danger of anything filling up,
+		 * as "Setting an event that is already set has no effect".
+		 *
+		 * Note: we assume that the kernel calls involved in latch management
+		 * will provide adequate synchronization on machines with weak memory
+		 * ordering, so that we cannot miss seeing is_set if a notification
+		 * has already been queued.
+		 */
+		if (set->latch && set->latch->is_set)
+		{
+			occurred_events->fd = -1;
+			occurred_events->pos = set->latch_pos;
+			occurred_events->events = WL_LATCH_SET;
+			occurred_events++;
+			returned_events++;
+
+			break;
+		}
+
+		/*
+		 * Wait for events using the readiness primitive chosen at the top of
+		 * this file.  If -1 is returned, a timeout has occurred; if 0, we
+		 * have to retry; anything >= 1 is the number of returned events.
+		 */
+		rc = WaitEventSetWaitBlock(set, cur_timeout,
+								   occurred_events, nevents);
+
+		if (rc == -1)
+			break;				/* timeout occurred */
+		else
+			returned_events = rc;
+
+		/* If we're not done, update cur_timeout for next iteration */
+		if (returned_events == 0 && timeout >= 0)
+		{
+			INSTR_TIME_SET_CURRENT(cur_time);
+			INSTR_TIME_SUBTRACT(cur_time, start_time);
+			cur_timeout = timeout - (long) INSTR_TIME_GET_MILLISEC(cur_time);
+			if (cur_timeout <= 0)
+				break;
+		}
+	}
+#ifndef WIN32
+	waiting = false;
+#endif
+
+	return returned_events;
+}
+
+#if defined(WAIT_USE_EPOLL)
+/*
+ * action can be one of EPOLL_CTL_ADD | EPOLL_CTL_MOD | EPOLL_CTL_DEL
+ */
+static void
+WaitEventAdjustEpoll(WaitEventSet *set, WaitEvent *event, int action)
+{
+	struct epoll_event epoll_ev;
+	int			rc;
+
+	/* pointer to our event, returned by epoll_wait */
+	epoll_ev.data.ptr = event;
+	/* always wait for errors */
+	epoll_ev.events = EPOLLERR | EPOLLHUP;
+
+	/* prepare the epoll event flags */
+	if (event->events == WL_LATCH_SET)
+	{
+		Assert(set->latch != NULL);
+		epoll_ev.events |= EPOLLIN;
+	}
+	else if (event->events == WL_POSTMASTER_DEATH)
+	{
+		epoll_ev.events |= EPOLLIN;
+	}
+	else
+	{
+		Assert(event->fd >= 0);
+		Assert(event->events & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE));
+
+		if (event->events & WL_SOCKET_READABLE)
+			epoll_ev.events |= EPOLLIN;
+		if (event->events & WL_SOCKET_WRITEABLE)
+			epoll_ev.events |= EPOLLOUT;
+	}
+
+	/*
+	 * Even though unused, we also pass epoll_ev as the data argument if
+	 * EPOLL_CTL_DEL is passed as action.  There used to be an epoll bug
+	 * requiring that, and actually it makes the code simpler...
+	 */
+	rc = epoll_ctl(set->epoll_fd, action, event->fd, &epoll_ev);
+
+	if (rc < 0)
+		ereport(ERROR,
+				(errcode_for_socket_access(),
+				 errmsg("epoll_ctl() failed: %m")));
+}
+#endif
+
+#if defined(WAIT_USE_POLL)
+static void
+WaitEventAdjustPoll(WaitEventSet *set, WaitEvent *event)
+{
+	struct pollfd *pollfd = &set->pollfds[event->pos];
+
+	pollfd->revents = 0;
+	pollfd->fd = event->fd;
+
+	/* prepare pollfd entry once */
+	if (event->events == WL_LATCH_SET)
+	{
+		Assert(set->latch != NULL);
+		pollfd->events = POLLIN;
+	}
+	else if (event->events == WL_POSTMASTER_DEATH)
+	{
+		pollfd->events = POLLIN;
+	}
+	else
+	{
+		Assert(event->events & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE));
+		pollfd->events = 0;
+		if (event->events & WL_SOCKET_READABLE)
+			pollfd->events |= POLLIN;
+		if (event->events & WL_SOCKET_WRITEABLE)
+			pollfd->events |= POLLOUT;
+	}
+
+	Assert(event->fd >= 0);
+}
+#endif
+
+#if defined(WAIT_USE_WIN32)
+static void
+WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)
+{
+	HANDLE	   *handle = &set->handles[event->pos + 1];
+
+	if (event->events == WL_LATCH_SET)
+	{
+		Assert(set->latch != NULL);
+		*handle = set->latch->event;
+	}
+	else if (event->events == WL_POSTMASTER_DEATH)
+	{
+		*handle = PostmasterHandle;
+	}
+	else
+	{
+		int			flags = FD_CLOSE;	/* always check for errors/EOF */
+
+		if (event->events & WL_SOCKET_READABLE)
+			flags |= FD_READ;
+		if (event->events & WL_SOCKET_WRITEABLE)
+			flags |= FD_WRITE;
+
+		if (*handle == WSA_INVALID_EVENT)
+		{
+			*handle = WSACreateEvent();
+			if (*handle == WSA_INVALID_EVENT)
+				elog(ERROR, "failed to create event for socket: error code %u",
+					 WSAGetLastError());
+		}
+		if (WSAEventSelect(event->fd, *handle, flags) != 0)
+			elog(ERROR, "failed to set up event for socket: error code %u",
+				 WSAGetLastError());
+
+		Assert(event->fd >= 0);
+	}
+}
+#endif
+
+
+#if defined(WAIT_USE_EPOLL)
+
+/*
+ * Wait using linux' epoll_wait(2).
+ *
+ * This is the preferable wait method, as several readiness notifications are
+ * delivered, without having to iterate through all of set->events.  The
+ * returned epoll_event structs contain a pointer to our events, making
+ * association easy.
+ */
+static int
+WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
+					  WaitEvent *occurred_events, int nevents)
+{
+	int			returned_events = 0;
+	int			rc;
+	WaitEvent  *cur_event;
+	struct epoll_event *cur_epoll_event;
+
+	/* Sleep */
+	rc = epoll_wait(set->epoll_fd, set->epoll_ret_events,
+					nevents, cur_timeout);
+
+	/* Check return code */
+	if (rc < 0)
+	{
+		/* EINTR is okay, otherwise complain */
+		if (errno != EINTR)
+		{
+			waiting = false;
+			ereport(ERROR,
+					(errcode_for_socket_access(),
+					 errmsg("epoll_wait() failed: %m")));
+		}
+		return 0;
+	}
+	else if (rc == 0)
+	{
+		/* timeout exceeded */
+		return -1;
+	}
+
+	/*
+	 * At least one event occurred, iterate over the returned epoll events
+	 * until they're either all processed, or we've returned all the events
+	 * the caller desired.
+	 */
+	for (cur_epoll_event = set->epoll_ret_events;
+		 cur_epoll_event < (set->epoll_ret_events + rc) &&
+		 returned_events < nevents;
+		 cur_epoll_event++)
+	{
+		/* epoll's data pointer is set to the associated WaitEvent */
+		cur_event = (WaitEvent *) cur_epoll_event->data.ptr;
+
+		occurred_events->pos = cur_event->pos;
+		occurred_events->events = 0;
+
+		if (cur_event->events == WL_LATCH_SET &&
+			cur_epoll_event->events & (EPOLLIN | EPOLLERR | EPOLLHUP))
+		{
+			/* There's data in the self-pipe, clear it. */
+			drainSelfPipe();
+
+			if (set->latch->is_set)
+			{
+				occurred_events->fd = -1;
+				occurred_events->events = WL_LATCH_SET;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+		else if (cur_event->events == WL_POSTMASTER_DEATH &&
+				 cur_epoll_event->events & (EPOLLIN | EPOLLERR | EPOLLHUP))
+		{
+			/*
+			 * We expect an EPOLLHUP when the remote end is closed, but
+			 * because we don't expect the pipe to become readable or to have
+			 * any errors either, treat those cases as postmaster death, too.
+			 *
+			 * According to the select(2) man page on Linux, select(2) may
+			 * spuriously return and report a file descriptor as readable,
+			 * when it's not; and presumably so can epoll_wait(2).  It's not
+			 * clear that the relevant cases would ever apply to the
+			 * postmaster pipe, but since the consequences of falsely
+			 * returning WL_POSTMASTER_DEATH could be pretty unpleasant, we
+			 * take the trouble to positively verify EOF with
+			 * PostmasterIsAlive().
+			 */
+			if (!PostmasterIsAlive())
+			{
+				occurred_events->fd = -1;
+				occurred_events->events = WL_POSTMASTER_DEATH;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+		else if (cur_event->events & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
+		{
+			Assert(cur_event->fd >= 0);
+
+			if ((cur_event->events & WL_SOCKET_READABLE) &&
+				(cur_epoll_event->events & (EPOLLIN | EPOLLERR | EPOLLHUP)))
+			{
+				occurred_events->events |= WL_SOCKET_READABLE;
+			}
+
+			if ((cur_event->events & WL_SOCKET_WRITEABLE) &&
+				(cur_epoll_event->events & (EPOLLOUT | EPOLLERR | EPOLLHUP)))
+			{
+				occurred_events->events |= WL_SOCKET_WRITEABLE;
+			}
+
+			if (occurred_events->events != 0)
+			{
+				occurred_events->fd = cur_event->fd;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+	}
+
+	return returned_events;
+}
+
+#elif defined(WAIT_USE_POLL)
+
+/*
+ * Wait using poll(2).
+ *
+ * This allows receiving readiness notifications for several events at once,
+ * but requires iterating through all of set->pollfds.
+ */
+static inline int
+WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
+					  WaitEvent *occurred_events, int nevents)
+{
+	int			returned_events = 0;
+	int			rc;
+	WaitEvent  *cur_event;
+	struct pollfd *cur_pollfd;
+
+	/* return immediately if latch is set */
+	if (set->latch && set->latch->is_set)
+	{
+		occurred_events->fd = -1;
+		occurred_events->pos = set->latch_pos;
+		occurred_events->events = WL_LATCH_SET;
+		occurred_events++;
+		returned_events++;
+
+		return returned_events;
+	}
+
+	/* Sleep */
+	rc = poll(set->pollfds, set->nevents, (int) cur_timeout);
+
+	/* Check return code */
+	if (rc < 0)
+	{
+		/* EINTR is okay, otherwise complain */
+		if (errno != EINTR)
+		{
+			waiting = false;
+			ereport(ERROR,
+					(errcode_for_socket_access(),
+					 errmsg("poll() failed: %m")));
+		}
+		return 0;
+	}
+	else if (rc == 0)
+	{
+		/* timeout exceeded */
+		return -1;
+	}
+
+	for (cur_event = set->events, cur_pollfd = set->pollfds;
+		 cur_event < (set->events + set->nevents) &&
+		 returned_events < nevents;
+		 cur_event++, cur_pollfd++)
+	{
+		occurred_events->pos = cur_event->pos;
+		occurred_events->events = 0;
+
+		if (cur_event->events == WL_LATCH_SET &&
+			(cur_pollfd->revents & (POLLIN | POLLHUP | POLLERR | POLLNVAL)))
+		{
+			/* There's data in the self-pipe, clear it. */
+			drainSelfPipe();
+
+			if (set->latch->is_set)
+			{
+				occurred_events->fd = -1;
+				occurred_events->events = WL_LATCH_SET;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+		else if (cur_event->events == WL_POSTMASTER_DEATH &&
+			 (cur_pollfd->revents & (POLLIN | POLLHUP | POLLERR | POLLNVAL)))
+		{
+			/*
+			 * We expect a POLLHUP when the remote end is closed, but because
+			 * we don't expect the pipe to become readable or to have any
+			 * errors either, treat those cases as postmaster death, too.
+			 *
+			 * According to the select(2) man page on Linux, select(2) may
+			 * spuriously return and report a file descriptor as readable,
+			 * when it's not; and presumably so can poll(2).  It's not clear
+			 * that the relevant cases would ever apply to the postmaster
+			 * pipe, but since the consequences of falsely returning
+			 * WL_POSTMASTER_DEATH could be pretty unpleasant, we take the
+			 * trouble to positively verify EOF with PostmasterIsAlive().
+			 */
+			if (!PostmasterIsAlive())
+			{
+				occurred_events->fd = -1;
+				occurred_events->events = WL_POSTMASTER_DEATH;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+		else if (cur_event->events & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
+		{
+			Assert(cur_event->fd >= 0);
+
+			if ((cur_event->events & WL_SOCKET_READABLE) &&
+			 (cur_pollfd->revents & (POLLIN | POLLHUP | POLLERR | POLLNVAL)))
+			{
+				occurred_events->events |= WL_SOCKET_READABLE;
+			}
+
+			if ((cur_event->events & WL_SOCKET_WRITEABLE) &&
+			(cur_pollfd->revents & (POLLOUT | POLLHUP | POLLERR | POLLNVAL)))
+			{
+				occurred_events->events |= WL_SOCKET_WRITEABLE;
+			}
+
+			if (occurred_events->events != 0)
+			{
+				occurred_events->fd = cur_event->fd;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+	}
+	return returned_events;
+}
+
+#elif defined(WAIT_USE_SELECT)
+
+/*
+ * Wait using select(2).
+ *
+ * On at least older linux kernels select(), in violation of POSIX,
+ * doesn't reliably return a socket as writable if closed - but we rely on
+ * that. So far all the known cases of this problem are on platforms that
+ * also provide a poll() implementation without that bug.  If we find one
+ * where that's not the case, we'll need to add a workaround.
+ */
+static inline int
+WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
+					  WaitEvent *occurred_events, int nevents)
+{
+	int			returned_events = 0;
+	int			rc;
+	WaitEvent  *cur_event;
+	fd_set		input_mask;
+	fd_set		output_mask;
+	int			hifd = 0;
+	struct timeval tv;
+	struct timeval *tvp = NULL;
+
+	FD_ZERO(&input_mask);
+	FD_ZERO(&output_mask);
+
+	/*
+	 * Prepare input/output masks. We do so every loop iteration as there's no
+	 * entirely portable way to copy fd_sets.
+	 */
+	for (cur_event = set->events;
+		 cur_event < (set->events + set->nevents);
+		 cur_event++)
+	{
+		if (cur_event->events == WL_LATCH_SET)
+			FD_SET(cur_event->fd, &input_mask);
+		else if (cur_event->events == WL_POSTMASTER_DEATH)
+			FD_SET(cur_event->fd, &input_mask);
+		else
+		{
+			Assert(cur_event->events & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE));
+			if (cur_event->events & WL_SOCKET_READABLE)
+				FD_SET(cur_event->fd, &input_mask);
+			if (cur_event->events & WL_SOCKET_WRITEABLE)
+				FD_SET(cur_event->fd, &output_mask);
+		}
+
+		if (cur_event->fd > hifd)
+			hifd = cur_event->fd;
+	}
+
+	/* Sleep */
+	if (cur_timeout >= 0)
+	{
+		tv.tv_sec = cur_timeout / 1000L;
+		tv.tv_usec = (cur_timeout % 1000L) * 1000L;
+		tvp = &tv;
+	}
+	rc = select(hifd + 1, &input_mask, &output_mask, NULL, tvp);
+
+	/* Check return code */
+	if (rc < 0)
+	{
+		/* EINTR is okay, otherwise complain */
+		if (errno != EINTR)
+		{
+			waiting = false;
+			ereport(ERROR,
+					(errcode_for_socket_access(),
+					 errmsg("select() failed: %m")));
+		}
+		return 0; /* retry */
+	}
+	else if (rc == 0)
+	{
+		/* timeout exceeded */
+		return -1;
+	}
+
+	/*
+	 * To associate events with select's masks, we loop through all events
+	 * and check the status of the file descriptor associated with each.
+	 */
+	for (cur_event = set->events;
+		 cur_event < (set->events + set->nevents)
+		 && returned_events < nevents;
+		 cur_event++)
+	{
+		occurred_events->pos = cur_event->pos;
+		occurred_events->events = 0;
+
+		if (cur_event->events == WL_LATCH_SET &&
+			FD_ISSET(cur_event->fd, &input_mask))
+		{
+			/* There's data in the self-pipe, clear it. */
+			drainSelfPipe();
+
+			if (set->latch->is_set)
+			{
+				occurred_events->fd = -1;
+				occurred_events->events = WL_LATCH_SET;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+		else if (cur_event->events == WL_POSTMASTER_DEATH &&
+				 FD_ISSET(cur_event->fd, &input_mask))
+		{
+			/*
+			 * According to the select(2) man page on Linux, select(2) may
+			 * spuriously return and report a file descriptor as readable,
+			 * when it's not; and presumably so can poll(2).  It's not clear
+			 * that the relevant cases would ever apply to the postmaster
+			 * pipe, but since the consequences of falsely returning
+			 * WL_POSTMASTER_DEATH could be pretty unpleasant, we take the
+			 * trouble to positively verify EOF with PostmasterIsAlive().
+			 */
+			if (!PostmasterIsAlive())
+			{
+				occurred_events->fd = -1;
+				occurred_events->events = WL_POSTMASTER_DEATH;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+		else if (cur_event->events & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
+		{
+			Assert(cur_event->fd >= 0);
+
+			if ((cur_event->events & WL_SOCKET_READABLE) &&
+				FD_ISSET(cur_event->fd, &input_mask))
+			{
+				/* data available in socket, or EOF */
+				occurred_events->events |= WL_SOCKET_READABLE;
+			}
+
+			if ((cur_event->events & WL_SOCKET_WRITEABLE) &&
+				FD_ISSET(cur_event->fd, &output_mask))
+			{
+				/* socket is writeable, or EOF */
+				occurred_events->events |= WL_SOCKET_WRITEABLE;
+			}
+
+			if (occurred_events->events != 0)
+			{
+				occurred_events->fd = cur_event->fd;
+				occurred_events++;
+				returned_events++;
+			}
+		}
+	}
+	return returned_events;
+}
+
+#elif defined(WAIT_USE_WIN32)
+
+/*
+ * Wait using Windows' WaitForMultipleObjects().
+ *
+ * Unfortunately this will only ever return a single readiness notification
+ * at a time.
+ */
+static inline int
+WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,
+					  WaitEvent *occurred_events, int nevents)
+{
+	int			returned_events = 0;
+	DWORD		rc;
+	WaitEvent  *cur_event;
+
+	/*
+	 * Sleep.
+	 *
+	 * Need to wait on nevents + 1 handles, because the signal handle is in
+	 * handles[0].
+	 */
+	rc = WaitForMultipleObjects(set->nevents + 1, set->handles, FALSE,
+								cur_timeout);
+
+	/* Check return code */
+	if (rc == WAIT_FAILED)
+		elog(ERROR, "WaitForMultipleObjects() failed: error code %lu",
+			 GetLastError());
+	else if (rc == WAIT_TIMEOUT)
+	{
+		/* timeout exceeded */
+		return -1;
+	}
+
+	if (rc == WAIT_OBJECT_0)
+	{
+		/* Service newly-arrived signals */
+		pgwin32_dispatch_queued_signals();
+		return 0;				/* retry */
+	}
+
+	/*
+	 * With an offset of one, due to pgwin32_signal_event, the handle offset
+	 * directly corresponds to a wait event.
+	 */
+	cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];
+
+	occurred_events->pos = cur_event->pos;
+	occurred_events->events = 0;
+
+	if (cur_event->events == WL_LATCH_SET)
+	{
+		if (!ResetEvent(set->latch->event))
+			elog(ERROR, "ResetEvent failed: error code %lu", GetLastError());
+
+		if (set->latch->is_set)
+		{
+			occurred_events->fd = -1;
+			occurred_events->events = WL_LATCH_SET;
+			occurred_events++;
+			returned_events++;
+		}
+	}
+	else if (cur_event->events == WL_POSTMASTER_DEATH)
+	{
+		/*
+		 * Postmaster apparently died.  Since the consequences of falsely
+		 * returning WL_POSTMASTER_DEATH could be pretty unpleasant, we take
+		 * the trouble to positively verify this with PostmasterIsAlive(),
+		 * even though there is no known reason to think that the event could
+		 * be falsely set on Windows.
+		 */
+		if (!PostmasterIsAlive())
+		{
+			occurred_events->fd = -1;
+			occurred_events->events = WL_POSTMASTER_DEATH;
+			occurred_events++;
+			returned_events++;
+		}
+	}
+	else if (cur_event->events & (WL_SOCKET_READABLE | WL_SOCKET_WRITEABLE))
+	{
+		WSANETWORKEVENTS resEvents;
+
+		Assert(cur_event->fd != PGINVALID_SOCKET);
+
+		occurred_events->fd = cur_event->fd;
+
+		ZeroMemory(&resEvents, sizeof(resEvents));
+		if (WSAEnumNetworkEvents(cur_event->fd, set->handles[cur_event->pos + 1], &resEvents) != 0)
+			elog(ERROR, "failed to enumerate network events: error code %u",
+				 WSAGetLastError());
+		if ((cur_event->events & WL_SOCKET_READABLE) &&
+			(resEvents.lNetworkEvents & FD_READ))
+		{
+			occurred_events->events |= WL_SOCKET_READABLE;
+		}
+		if ((cur_event->events & WL_SOCKET_WRITEABLE) &&
+			(resEvents.lNetworkEvents & FD_WRITE))
+		{
+			occurred_events->events |= WL_SOCKET_WRITEABLE;
+		}
+		if (resEvents.lNetworkEvents & FD_CLOSE)
+		{
+			if (cur_event->events & WL_SOCKET_READABLE)
+				occurred_events->events |= WL_SOCKET_READABLE;
+			if (cur_event->events & WL_SOCKET_WRITEABLE)
+				occurred_events->events |= WL_SOCKET_WRITEABLE;
+		}
+
+		if (occurred_events->events != 0)
+		{
+			occurred_events++;
+			returned_events++;
+		}
+	}
+
+	return returned_events;
+}
+#endif
+
+/*
  * SetLatch uses SIGUSR1 to wake up the process waiting on the latch.
  *
  * Wake up WaitLatch, if we're waiting.  (We might not be, since SIGUSR1 is
diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c
index 18f5e6f..d13355b 100644
--- a/src/backend/utils/init/miscinit.c
+++ b/src/backend/utils/init/miscinit.c
@@ -33,6 +33,7 @@
 
 #include "access/htup_details.h"
 #include "catalog/pg_authid.h"
+#include "libpq/libpq.h"
 #include "mb/pg_wchar.h"
 #include "miscadmin.h"
 #include "postmaster/autovacuum.h"
@@ -247,6 +248,9 @@ SwitchToSharedLatch(void)
 
 	MyLatch = &MyProc->procLatch;
 
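+	/* FeBeWaitSet's latch event is at position 1; see pq_init() */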
+	if (FeBeWaitSet)
+		ModifyWaitEvent(FeBeWaitSet, 1, WL_LATCH_SET, MyLatch);
+
 	/*
 	 * Set the shared latch as the local one might have been set. This
 	 * shouldn't normally be necessary as code is supposed to check the
@@ -262,6 +266,10 @@ SwitchBackToLocalLatch(void)
 	Assert(MyProc != NULL && MyLatch == &MyProc->procLatch);
 
 	MyLatch = &LocalLatchData;
+
+	if (FeBeWaitSet)
+		ModifyWaitEvent(FeBeWaitSet, 1, WL_LATCH_SET, MyLatch);
+
 	SetLatch(MyLatch);
 }
 
diff --git a/src/include/libpq/libpq.h b/src/include/libpq/libpq.h
index 0569994..109fdf7 100644
--- a/src/include/libpq/libpq.h
+++ b/src/include/libpq/libpq.h
@@ -19,6 +19,7 @@
 
 #include "lib/stringinfo.h"
 #include "libpq/libpq-be.h"
+#include "storage/latch.h"
 
 
 typedef struct
@@ -95,6 +96,8 @@ extern ssize_t secure_raw_write(Port *port, const void *ptr, size_t len);
 
 extern bool ssl_loaded_verify_locations;
 
+extern WaitEventSet *FeBeWaitSet;
+
 /* GUCs */
 extern char *SSLCipherSuites;
 extern char *SSLECDHCurve;
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 3813226..c72635c 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -530,6 +530,9 @@
 /* Define to 1 if you have the syslog interface. */
 #undef HAVE_SYSLOG
 
+/* Define to 1 if you have the <sys/epoll.h> header file. */
+#undef HAVE_SYS_EPOLL_H
+
 /* Define to 1 if you have the <sys/ioctl.h> header file. */
 #undef HAVE_SYS_IOCTL_H
 
diff --git a/src/include/storage/latch.h b/src/include/storage/latch.h
index 2719498..fa66ec3 100644
--- a/src/include/storage/latch.h
+++ b/src/include/storage/latch.h
@@ -102,9 +102,23 @@ typedef struct Latch
 #define WL_TIMEOUT			 (1 << 3)
 #define WL_POSTMASTER_DEATH  (1 << 4)
 
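+/*
+ * A WaitEventSet contains a set of events to wait for, so that the
+ * (potentially expensive) setup of the underlying readiness primitive can
+ * be reused across many waits.
+ */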
+typedef struct WaitEventSet WaitEventSet;
+
+typedef struct WaitEvent
+{
+	int		pos;		/* position in the event data structure */
+	uint32	events;		/* tripped events */
+	int		fd;			/* fd associated with event */
+} WaitEvent;
+
 /*
  * prototypes for functions in latch.c
  */
+extern WaitEventSet *CreateWaitEventSet(MemoryContext context, int nevents);
+extern void FreeWaitEventSet(WaitEventSet *set);
+extern int AddWaitEventToSet(WaitEventSet *set, uint32 events, int fd, Latch *latch);
+extern void ModifyWaitEvent(WaitEventSet *set, int pos, uint32 events, Latch *latch);
+extern int WaitEventSetWait(WaitEventSet *set, long timeout, WaitEvent *occurred_events, int nevents);
 extern void InitializeLatchSupport(void);
 extern void InitLatch(volatile Latch *latch);
 extern void InitSharedLatch(volatile Latch *latch);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b850db0..c2511de 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2113,6 +2113,8 @@ WalSnd
 WalSndCtlData
 WalSndSendDataCallback
 WalSndState
+WaitEvent
+WaitEventSet
 WholeRowVarExprState
 WindowAgg
 WindowAggState
-- 
2.7.0.229.g701fa7f

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
