On 2016-01-14 10:39:55 -0500, Robert Haas wrote:
> Doesn't this code make it possible for the self-pipe to fill up,
> self-deadlocking the process?  Suppose we repeatedly enter
> WaitLatchOrSocket().  Each time we do, just after waiting = true is
> set, somebody sets the latch.  We handle the signal and put a byte
> into the pipe.  Returning from the signal handler, we then notice that
> is_set is true and return at once, without draining the pipe.  Repeat
> until something bad happens.

Should be fine because the self-pipe is marked as non-blocking:
        if (fcntl(pipefd[1], F_SETFL, O_NONBLOCK) < 0)
                elog(FATAL, "fcntl() failed on write-end of self-pipe: %m");
and sendSelfPipeByte treats the would-block (pipe full) case as success:

                /*
                 * If the pipe is full, we don't need to retry, the data that's there
                 * already is enough to wake up WaitLatch.
                 */
                if (errno == EAGAIN || errno == EWOULDBLOCK)
                        return;
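
For context, the write side looks roughly like this (a sketch of
sendSelfPipeByte from memory, not the exact source; selfpipe_writefd is the
write end set up above):

        static void
        sendSelfPipeByte(void)
        {
                int             rc;
                char    dummy = 0;

        retry:
                rc = write(selfpipe_writefd, &dummy, 1);
                if (rc < 0)
                {
                        /* Interrupted by a signal: just retry */
                        if (errno == EINTR)
                                goto retry;

                        /* Pipe full: the byte already in it is enough to wake WaitLatch */
                        if (errno == EAGAIN || errno == EWOULDBLOCK)
                                return;

                        /*
                         * Any other failure: we may be in a signal handler, so it's
                         * not safe to elog(); just give up.
                         */
                        return;
                }
        }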


> > 0004: Support using epoll as the polling primitive in unix_latch.c.
> >       ~3% on high qps workloads, massive scalability improvements (x3)
> >       on very large machines.
> >
> > With 0004 obviously being the relevant bit for this thread. I verified
> > that using epoll addresses the performance problem, using the hardware
> > the OP noticed the performance problem on.
> 
> +                /*
> +                 * Unnecessarily pass data for delete due to bug errorneously
> +                 * requiring it in the past.
> +                 */
> 
> This is pretty vague.  And it has a spelling mistake.

Will add a reference to the manpage (where that requirement is coming
from).
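
In code terms, the manpage point is just this (a sketch; epollfd/sock are
placeholder names): kernels before 2.6.9 required a non-NULL event pointer
for EPOLL_CTL_DEL even though its contents are ignored, which is why one is
still passed.

        struct epoll_event ev = {0};    /* ignored for DEL, but must be non-NULL on old kernels */

        if (epoll_ctl(epollfd, EPOLL_CTL_DEL, sock, &ev) < 0)
                elog(ERROR, "epoll_ctl() failed: %m");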

> Further down, nodified -> notified.
> 
> +        if (wakeEvents != latch->lastmask || latch->lastwatchfd != sock)
> 
> I don't like this very much.

Yea, me neither, which is why I called it out... I don't think it's too
likely to cause problems in practice, though. But since changing the API
makes sense anyway, the likelihood shouldn't really matter.

> Incidentally, if we're going to whack around the latch API, it would
> be nice to pick a design which wouldn't be too hard to extend to
> waiting on multiple sockets.

Hm. That seems likely to make usage harder for users of the API. So it
seems like it'd make sense to provide a simpler version anyway, for the
majority of users.

So, I'm wondering how exactly we'd use a hypothetical
SetSocketToWaitOn, or SetSocketsToWaitOn (or whatever). I mean, it can
make a fair bit of sense to sometimes wait on MyLatch/port->sock and
sometimes on MyLatch/fdw connections. The simple proposed code changes
the epoll set whenever switching between the two, but with
SetSocketsToWaitOn you'd probably end up switching it much more often?
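
To be concrete about what "changing the epoll set" means, each switch is
something along these lines (a sketch; epollfd/oldsock/newsock are
placeholder names):

        struct epoll_event ev = {0};

        /* stop watching the previously registered socket */
        if (epoll_ctl(epollfd, EPOLL_CTL_DEL, oldsock, &ev) < 0)
                elog(ERROR, "epoll_ctl(DEL) failed: %m");

        /* start watching the new one */
        ev.events = EPOLLIN | EPOLLERR | EPOLLHUP;
        ev.data.fd = newsock;
        if (epoll_ctl(epollfd, EPOLL_CTL_ADD, newsock, &ev) < 0)
                elog(ERROR, "epoll_ctl(ADD) failed: %m");

That's two extra syscalls per switch, which is cheap if it happens rarely,
but not something you'd want on every WaitLatchOrSocket() call.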

One way to address that would be to create a 'latch wait' data structure
that'd then contain the epoll fd/win32 wait events/... That way you
could have one 'LatchWait' for latch + client socket and one for latch +
fdw sockets.
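
Very roughly, I'm imagining something like the following (all names
hypothetical, just to sketch the shape of the API; the win32 variant would
carry wait event handles instead of an epoll fd):

        typedef struct LatchWait
        {
                Latch      *latch;              /* latch included in every wait */
                int                     epoll_fd;       /* epoll set owned by this wait object */
                int                     nsockets;
                pgsocket        sockets[FLEXIBLE_ARRAY_MEMBER];
        } LatchWait;

        extern LatchWait *CreateLatchWait(Latch *latch, int nsockets, pgsocket *sockets);
        extern int      WaitOnLatchWait(LatchWait *lw, int wakeEvents, long timeout);
        extern void     FreeLatchWait(LatchWait *lw);

The backend's client-socket wait and an FDW's connection waits would each
set up their own LatchWait once and reuse it, so the epoll set only needs to
be modified when the underlying set of sockets actually changes.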

Greetings,

Andres Freund

