Alvaro Herrera wrote:
Excerpts from Tom Lane's message of Mon Nov 15 02:41:40 -0300 2010:
I believe also that there are probably race conditions in several of
the steps you listed; in particular there is certainly a risk involved
in changing the database-we-advertise-being-connected-to
Robert,
On 11/15/2010 05:39 AM, Robert Haas wrote:
I've spent a few hours poring over the source code with a
coarse-toothed comb, trying to figure out just exactly what might
break if we changed MyDatabaseId after backend startup time, or in
other words, allowed a backend to unbind from the
On Wednesday 17 November 2010 11:04:04 Markus Wanner wrote:
Robert,
On 11/15/2010 05:39 AM, Robert Haas wrote:
I've spent a few hours poring over the source code with a
coarse-toothed comb, trying to figure out just exactly what might
break if we changed MyDatabaseId after backend startup
Andres,
On 11/17/2010 11:38 AM, Andres Freund wrote:
Well, one could optimize most of the resetting away if the old
MyDatabaseId and the new one are the same - an optimization which is hardly
possible with forking new backends.
Uh? Why not simply re-use the same backend, then? Or do
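A minimal sketch of the fast path Andres describes, assuming a hypothetical
RebindToDatabase() entry point and reset helper (neither exists in the tree;
only MyDatabaseId and Oid are real names here):

    /* Hypothetical sketch: skip the expensive reset when a pooler asks
     * us to "switch" to the database we are already bound to. */
    void
    RebindToDatabase(Oid dboid)
    {
        if (dboid == MyDatabaseId)
            return;             /* same database: nothing to flush */

        /* Otherwise: flush relcache, catcache, plan cache, drop the temp
         * namespace, re-check pg_database ACLs, and so on. */
        ResetStateForNewDatabase(dboid);    /* hypothetical helper */
        MyDatabaseId = dboid;
    }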
On Wednesday 17 November 2010 11:58:33 Markus Wanner wrote:
Andres,
On 11/17/2010 11:38 AM, Andres Freund wrote:
Well, one could optimize most of the resetting away if the old
MyDatabaseId and the new one are the same - an optimization which is
hardly possible with forking new
On 11/17/2010 12:09 PM, Andres Freund wrote:
I am thinking of a connection-pooler-like setup. Quite often your main load
goes towards a single database - in that situation you don't have to reset
the database id most of the time.
Okay, so that's what I'd call a connection-reset or
Excerpts from Markus Wanner's message of Wed Nov 17 07:04:04 -0300 2010:
Thoughts?
The question obviously is whether or not this is faster than just
terminating one backend and starting a new one. Which basically costs an
additional termination and re-creation of a process (i.e. fork())
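For a rough sense of the raw process cost being weighed, a self-contained toy
benchmark (bare fork() plus exit only; the real cost also includes backend
initialization, which is the part a rebind would hope to skip):

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct timeval  t0, t1;
        int             i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < 1000; i++)
        {
            pid_t pid = fork();

            if (pid == 0)
                _exit(0);               /* child exits immediately */
            waitpid(pid, NULL, 0);      /* parent reaps it */
        }
        gettimeofday(&t1, NULL);
        printf("%.1f us per fork+exit\n",
               ((t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_usec - t0.tv_usec)) / 1000.0);
        return 0;
    }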
On 11/17/2010 01:27 PM, Alvaro Herrera wrote:
I don't think it's a speed thing only. It would be a great thing to
have in autovacuum, for example, where we have constant problem reports
because the system failed to fork a new backend. If we could simply
reuse an already existing one, it
Excerpts from Markus Wanner's message of Wed Nov 17 09:57:18 -0300 2010:
On 11/17/2010 01:27 PM, Alvaro Herrera wrote:
I don't think it's a speed thing only. It would be a great thing to
have in autovacuum, for example, where we have constant problem reports
because the system failed to
On 11/17/2010 02:19 PM, Alvaro Herrera wrote:
Well, the autovacuum mechanism involves a lot of back-and-forth between
launcher and postmaster, which includes some signals, a fork() and
backend initialization. The failure possibilities are endless.
Fork failure communication is similarly
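The failure path under discussion, heavily simplified (fork_process(),
AutoVacWorkerMain() and AutoVacWorkerFailed() are real names from
postmaster.c/autovacuum.c, but this is a sketch, not the actual control flow):

    /* Simplified sketch of worker startup: the launcher has already
     * signaled the postmaster; now the postmaster forks.  A fork
     * failure must be reported back so the launcher can retry. */
    switch (fork_process())
    {
        case -1:
            ereport(LOG,
                    (errmsg("could not fork autovacuum worker process: %m")));
            AutoVacWorkerFailed();      /* flag shared memory, poke launcher */
            break;
        case 0:
            AutoVacWorkerMain(0, NULL); /* child becomes the worker */
            break;
        default:
            break;                      /* postmaster carries on */
    }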
Markus Wanner mar...@bluegap.ch writes:
On 11/17/2010 02:19 PM, Alvaro Herrera wrote:
Well, the autovacuum mechanism involves a lot of back-and-forth between
launcher and postmaster, which includes some signals, a fork() and
backend initialization. The failure possibilities are endless.
On 11/17/2010 04:25 PM, Tom Lane wrote:
I'm afraid that any such change would trade a visible, safe failure
mechanism (no avworker) for invisible, impossible-to-debug data
corruption scenarios (due to failure to reset some bit of cached state).
It certainly won't give me any warm fuzzy feeling
On Wed, Nov 17, 2010 at 5:04 AM, Markus Wanner mar...@bluegap.ch wrote:
The question obviously is whether or not this is faster than just
terminating one backend and starting a new one.
I agree.
Which basically costs an
additional termination and re-creation of a process (i.e. fork())
On Wed, Nov 17, 2010 at 4:52 PM, Robert Haas robertmh...@gmail.com wrote:
However, that test doesn't capture everything. For example, imagine a
connection pooler sitting in front of PG. Rebinding to a new database
means disconnecting a TCP connection and establishing a new one.
Switching
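For context, the reconnect a pooler has to do today looks roughly like this
with libpq (a minimal sketch; switch_database() and the conninfo details are
invented, and error handling via PQstatus() is omitted for brevity):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* There is no protocol-level "rebind": switching databases means
     * tearing down one TCP connection and backend and building another. */
    PGconn *
    switch_database(PGconn *old, const char *newdb)
    {
        char    conninfo[256];

        PQfinish(old);                  /* closes the socket, kills the backend */
        snprintf(conninfo, sizeof(conninfo), "dbname=%s", newdb);
        return PQconnectdb(conninfo);   /* new socket, new fork, new init */
    }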
On Wed, Nov 17, 2010 at 12:42 PM, Greg Stark gsst...@mit.edu wrote:
On Wed, Nov 17, 2010 at 4:52 PM, Robert Haas robertmh...@gmail.com wrote:
However, that test doesn't capture everything. For example, imagine a
connection pooler sitting in front of PG. Rebinding to a new database
means
On Wed, Nov 17, 2010 at 6:33 PM, Robert Haas robertmh...@gmail.com wrote:
I think you're missing the point. If we switch databases, all cached
relations and plans have to be flushed anyway. We're talking about
what might NOT need to be flushed on switching databases.
Oh sorry, yes, I missed
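To make the flushed-vs-kept distinction concrete, a sketch using real
cache-reset entry points (the surrounding function is invented, and the list
in the closing comment is the open question, not a settled answer):

    /* Sketch: database-local state that would have to go on a rebind. */
    static void
    flush_for_rebind(void)
    {
        ResetCatalogCaches();           /* syscache/catcache entries */
        RelationCacheInvalidate();      /* relcache entries */
        ResetPlanCache();               /* cached plans */

        /* Candidates that might NOT need it: shared catalogs such as
         * pg_database and pg_authid, GUC settings, the caches' own
         * machinery. */
    }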
Excerpts from Tom Lane's message of Mon Nov 15 02:41:40 -0300 2010:
I believe also that there are probably race conditions in several of
the steps you listed; in particular there is certainly a risk involved
in changing the database-we-advertise-being-connected-to versus a
concurrent DROP
On Mon, Nov 15, 2010 at 12:41 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Looking through the code, it appears to me that we'd need to do the
following (not necessarily in this order):
Don't forget
9. Unload loadable modules that do not exist according to the new database's catalogs.
I've spent a few hours poring over the source code with a
coarse-toothed comb, trying to figure out just exactly what might
break if we changed MyDatabaseId after backend startup time, or in
other words, allowed a backend to unbind from the database to which it
was originally bound and rebind to a new one.
Robert Haas robertmh...@gmail.com writes:
Looking through the code, it appears to me that we'd need to do the
following (not necessarily in this order):
Don't forget
9. Unload loadable modules that do not exist according to the new
database's catalogs; eg we don't want postgis trying to run
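What step 9 might look like, sketched with PostgreSQL's list macros; Module,
loaded_modules, module_known_in_new_db() and unload_module() are all invented
for illustration (there is in fact no safe way to unload a shared library
from a running backend today):

    /* Hypothetical step 9: after rebinding, unload any module the new
     * database's catalogs don't know about, so e.g. postgis doesn't
     * run against a database that lacks its support objects. */
    static void
    unload_unknown_modules(void)
    {
        ListCell   *lc;

        foreach(lc, loaded_modules)                 /* invented list */
        {
            Module *mod = (Module *) lfirst(lc);

            if (!module_known_in_new_db(mod))       /* invented catalog probe */
                unload_module(mod);                 /* invented; unsafe today */
        }
    }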