On 11/01/2018 07:40 PM, Andres Freund wrote:
> Hi,
>
> On 2018-11-01 19:33:39 +0100, Tomas Vondra wrote:
>> In theory, simulating such a global limit should be possible using a
>> bit of shared memory for the current total, a per-process counter,
>> and probably some simple abort handling (say, just like
>> contrib/openssl does using ResourceOwner).
>
> Right. I don't think you even need something resowner-like, given that
> anything using threads had better make it absolutely impossible for an
> error to escape.
>

True. Still, I wonder if the process can die in a way that would fail
to update the counter.
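
For illustration, a minimal sketch of such a shared counter might look
like this (untested; the struct, function names and the limit variable
are made up for this example, only the atomics calls are actual
PostgreSQL API):

    /*
     * Hypothetical sketch of a cluster-wide thread-count limit kept in
     * shared memory. Assumes the extension has reserved and initialized
     * the struct during shared-memory setup (e.g. shmem_startup_hook).
     */
    #include "postgres.h"
    #include "port/atomics.h"

    typedef struct ThreadLimitShared
    {
        pg_atomic_uint32 active_threads;  /* current total, all backends */
    } ThreadLimitShared;

    static ThreadLimitShared *thread_limit;   /* set at shmem init */
    static int  max_worker_threads = 64;      /* would be a GUC */

    /*
     * Reserve a slot before spawning a thread. Returns false when the
     * global limit is already reached (the increment is undone).
     */
    static bool
    thread_slot_acquire(void)
    {
        uint32  old;

        old = pg_atomic_fetch_add_u32(&thread_limit->active_threads, 1);
        if (old >= (uint32) max_worker_threads)
        {
            pg_atomic_fetch_sub_u32(&thread_limit->active_threads, 1);
            return false;
        }
        return true;
    }

    /*
     * Release the slot once the thread has exited. This must not be
     * skipped, which is exactly where the abort-handling question
     * comes in.
     */
    static void
    thread_slot_release(void)
    {
        pg_atomic_fetch_sub_u32(&thread_limit->active_threads, 1);
    }

If a backend dies between acquire and release, the count leaks, which
is why something resowner-like (or cleanup registered via
before_shmem_exit) may still be needed despite the above.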

>> A better solution might be to start a bgworker managing a connection
>> pool and forward the requests to it using IPC (and enforce the
>> thread count limit there).
>
> That doesn't really seem feasible for cases like this - after all,
> you'd only use threads to work on individual rows if you wanted to
> parallelize relatively granular per-row work or such. Adding
> cross-process IPC seems like it'd make that perform badly.
>

I think that very much depends on how expensive the tasks handled by
the threads are. Even with the IPC overhead, a bgworker pool may still
be cheaper than the per-row work itself, and if you don't
create/destroy threads for every task, that also saves quite a bit of
time.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services