Never crash-and-restart for bgworkers without shared memory access.
The motivation for a crash-and-restart cycle when a backend dies is
that it might have corrupted shared memory on the way down, and we
can't recover reliably except by reinitializing everything. But that
doesn't apply to background workers that have no access to shared
memory, so there is no need to take down the whole cluster when one
of them dies.
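For context, whether a worker maps shared memory is declared at
registration time via bgw_flags. A minimal sketch against the
9.4-era bgworker API (worker_main is a hypothetical entry point,
not part of this commit):

    #include "postgres.h"
    #include "postmaster/bgworker.h"

    static void worker_main(Datum main_arg);   /* hypothetical payload */

    void
    _PG_init(void)
    {
        BackgroundWorker worker;

        memset(&worker, 0, sizeof(worker));
        /* No BGWORKER_SHMEM_ACCESS bit: this worker never attaches to
         * shared memory, so its death can't corrupt it and (with this
         * commit) won't force a cluster-wide crash-and-restart cycle. */
        worker.bgw_flags = 0;
        worker.bgw_start_time = BgWorkerStart_PostmasterStart;
        worker.bgw_restart_time = 10;           /* seconds until restart */
        worker.bgw_main = worker_main;
        snprintf(worker.bgw_name, BGW_MAXLEN, "shmem-less worker");
        RegisterBackgroundWorker(&worker);
    }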
Fix failure to set ActiveSnapshot while rewinding a cursor.
ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least). The wisdom of that is somewhat
debatable, perhaps, but …
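The shape of the fix, as a sketch (the committed code in pquery.c may
differ in detail): push the query's snapshot around the rewind so any
user-defined functions run during ReScan see a valid ActiveSnapshot.

    /* Sketch: queryDesc is the cursor's QueryDesc. */
    PushActiveSnapshot(queryDesc->snapshot);
    ExecutorRewind(queryDesc);
    PopActiveSnapshot();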
Detach shared memory from bgworkers without shmem access.
Since the postmaster won't perform a crash-and-restart sequence
for background workers which don't request shared memory access,
we'd better make sure that they can't corrupt shared memory.
Patch by me, review by Tom Lane.
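The guard amounts to unmapping both the main segment and any dynamic
shared memory early in worker startup; a sketch of the idea (exact
placement in postmaster.c may differ):

    /* In the freshly forked worker, before any user code runs: */
    if ((worker->bgw_flags & BGWORKER_SHMEM_ACCESS) == 0)
    {
        dsm_detach_all();        /* drop dynamic shared memory mappings */
        PGSharedMemoryDetach();  /* unmap the main shared memory segment */
    }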
Clean up jsonb code.
The main target of this cleanup is the convertJsonb() function, but I also
touched a lot of other things that I spotted in the process.
The new convertToJsonb() function uses an output buffer that's resized on
demand, so the code to estimate the size of a JsonbValue is no longer needed.
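StringInfo is the stock way to get such a grow-on-demand buffer in the
backend; a sketch of the append pattern, assuming convertToJsonb()
builds on it (data and len stand in for the serialized bytes of one
node):

    #include "lib/stringinfo.h"

    StringInfoData buf;

    initStringInfo(&buf);
    /* appendBinaryStringInfo enlarges the buffer as needed, so no
     * up-front estimate of the JsonbValue's size is required. */
    appendBinaryStringInfo(&buf, (const char *) data, len);
    /* ... repeat while walking the JsonbValue tree; buf.data/buf.len
     * then hold the complete serialized value ... */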
Restart bgworkers immediately after a crash-and-restart cycle.
Just as we would start bgworkers immediately after an initial startup
of the server, we should restart them immediately when reinitializing.
Petr Jelinek and Robert Haas
Branch
--
master
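A sketch of the mechanism (hypothetical function name; assumes the
postmaster's BackgroundWorkerList and the rw_crashed_at bookkeeping in
bgworker.c): clearing each worker's crash timestamp during
reinitialization makes the normal start logic consider it immediately
eligible instead of waiting out bgw_restart_time.

    #include "postmaster/bgworker_internals.h"

    static void
    reset_worker_crash_times(void)      /* hypothetical name */
    {
        slist_iter  iter;

        slist_foreach(iter, &BackgroundWorkerList)
        {
            RegisteredBgWorker *rw;

            rw = slist_container(RegisteredBgWorker, rw_lnode, iter.cur);
            rw->rw_crashed_at = 0;      /* forget the crash; start now */
        }
    }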
When a bgworker exits, always call ReleasePostmasterChildSlot.
Commit e2ce9aa27bf20eff2d991d0267a15ea5f7024cd7 was insufficiently
well thought out. Repair.
Branch
--
master
Details
---
http://git.postgresql.org/pg/commitdiff/eee6cf1f337aa488a20e9111df446cdad770e645
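The repair boils down to releasing the slot on every exit path; a
sketch (field names as in the postmaster's RegisteredBgWorker
bookkeeping):

    /* On any bgworker exit path, give back the child-process slot so
     * the postmaster's bookkeeping doesn't leak entries. */
    if (rw->rw_child_slot != 0)
    {
        (void) ReleasePostmasterChildSlot(rw->rw_child_slot);
        rw->rw_child_slot = 0;
    }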
Fix build after removing JsonbValue.estSize field.
Oops, I didn't realize that contrib/hstore refers to jsonb stuff.
Branch
--
master
Details
---
http://git.postgresql.org/pg/commitdiff/7572b7735971cd7a5ef289e133eedf7d82f79c42
Modified Files
--
contrib/hstore/hstore_io.c
When a background worker exits with code 0, unregister it.
The previous behavior was to restart immediately, which was generally
viewed as less useful.
Petr Jelinek, with some adjustments by me.
Branch
--
master
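From the worker's point of view, opting out is now just a clean exit;
a sketch (do_one_time_task is a hypothetical payload):

    void
    worker_main(Datum main_arg)
    {
        BackgroundWorkerUnblockSignals();

        do_one_time_task();   /* hypothetical one-shot job */

        /* Exit code 0 now unregisters the worker; a nonzero code still
         * triggers a restart after bgw_restart_time. */
        proc_exit(0);
    }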
Thanks for cleaning this up.
On Wed, May 7, 2014 at 1:18 PM, Heikki Linnakangas
<heikki.linnakan...@iki.fi> wrote:
The jsonb_exists_any and jsonb_exists_all functions no longer sort the input
array. That was a premature optimization, the idea being that if there are
duplicates in the input …
Avoid buffer bloat in libpq when server is consistently faster than client.
If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last …
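A generic illustration of the compaction idea, not libpq's actual
code: once all buffered input has been consumed, reset the offsets so
repeated bufferloads don't keep doubling the allocation.

    #include <string.h>

    typedef struct
    {
        char   *buf;
        size_t  size;    /* allocated size */
        size_t  start;   /* first unconsumed byte */
        size_t  end;     /* one past the last valid byte */
    } ReadBuffer;

    static void
    compact(ReadBuffer *rb)
    {
        if (rb->start == rb->end)
            rb->start = rb->end = 0;    /* fully consumed: reuse from 0 */
        else if (rb->start > 0)
        {
            /* slide the unconsumed tail down instead of growing */
            memmove(rb->buf, rb->buf + rb->start, rb->end - rb->start);
            rb->end -= rb->start;
            rb->start = 0;
        }
    }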