Claudio Natoli wrote:
I wrote:
But I'll happily concede the point, and prove it by knocking
up a patch for it over the weekend (unless anyone else
particularly wants to).
Occurs to me I could kill 2 birds with one stone, and also implement another
Win32 sticking point, namely the
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
As I understand it, the postmaster shared memory idea is good because
only the postmaster writes to it, and only the backends read from it.
If the HANDLE works the same way, I think you should put it into the
shared memory too, hence
Your patch has been added to the PostgreSQL unapplied patches list at:
http://momjian.postgresql.org/cgi-bin/pgpatches
I will try to apply it within the next 48 hours.
---
Claudio Natoli wrote:
For application
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
Something after 2003/11/20 enhanced the query cancel handling. Namely,
CVS tip now responds to a query cancel with a postmaster restart
canceling all queries. Could the fork/exec stuff be responsible for this?
Whoever changed this:
Done and attached.
---
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Claudio specified the attached fix, which I have applied (this time).
The ereport must vanish back into its black hole, also.
For application to HEAD, pending community review.
Drops in the CreateProcess calls for Win32 (essentially wrapping up the
fork/exec portion of the port), and fixes a handful of whitespace issues
(that no doubt I'm to blame for; my apologies) in the source base.
Cheers,
Claudio
---
Claudio Natoli wrote:
Something after 2003/11/20 enhanced the query cancel handling. Namely,
CVS tip now responds to a query cancel with a postmaster restart
canceling all queries. Could the fork/exec stuff be responsible for this?
Jan
Hi Bruce,
Claudio, where are we on this patch? I see an
Jan Wieck [EMAIL PROTECTED] writes:
Something after 2003/11/20 enhanced the query cancel handling. Namely,
CVS tip now responds to a query cancel with a postmaster restart
canceling all queries. Could the fork/exec stuff be responsible for this?
Whoever changed this:
status =
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
Something after 2003/11/20 enhanced the query cancel handling. Namely,
CVS tip now responds to a query cancel with a postmaster restart
canceling all queries. Could the fork/exec stuff be responsible for this?
Whoever changed this:
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
My guess is that this used to proc_exit the postgres backend, but now
proc_exits the postmaster, but that is only a guess.
No. This is post-fork (and had better remain so). The change causes
the sub-postmaster that has just
Whoever changed this:
My fault. Thanks for the catch Jan.
proc_exit(status);
Should be proc_exit(0).
My apologies,
Claudio
PS. I take it there is no regression test for this. Would this be a
difficult one to knock up?
---
Certain disclaimers and policies apply to all email sent from Memetrics.
Bruce Momjian [EMAIL PROTECTED] writes:
When you say sub-postmaster, you mean we fork, then process the cancel
request? Seems we might need special handling in there, yea.
We fork before we even read the request. Otherwise an attacker can
trivially lock up the postmaster by sending a partial
Tom Lane wrote:
I've just noticed another serious bit of breakage in CVS tip (though
this might be fixed by Claudio's pending patch that reverts a lot of
the code rearrangements).
It's already been applied. Your push in the SubPostmaster direction was
really useful; a lot cleaner, and fixed
Claudio Natoli wrote:
The only things I've thought of so far are:
a) sticking the PID/cancel key list in shared mem [yeech]
b) rearranging the entire cancel handling to occur in the postmaster [double
yeech]
(a) seems like the lesser of the available evils (unless someone has a
bright idea
Jan Wieck [EMAIL PROTECTED] writes:
It doesn't hurt to keep the locations and code as much in sync as
possible. I think Tom's idea to move the information into the PGPROC
entry is the winner and does not need any OS specific handling.
Actually, on further reflection a separate array to store
Tom Lane writes:
Actually, on further reflection a separate array to store PIDs and
cancel keys is probably a better idea.
[snip]
I still think it's unnecessary to make a separate shmem segment for it,
though.
Why is that? Don't we need the backends to have access to it to do a cancel
Claudio Natoli wrote:
Tom Lane writes:
Actually, on further reflection a separate array to store PIDs and
cancel keys is probably a better idea.
[snip]
I still think it's unnecessary to make a separate shmem segment for it,
though.
Why is that? Don't we need the backends to have
I think they are saying put the cancel key inside the existing shared
memory segment.
Ok. Thanks.
I don't know when we actually attach to the main shared
memory segment in the child, but it would have to be sooner than when we
need the cancel key.
Yes. Currently, it happens too late.
Claudio Natoli [EMAIL PROTECTED] writes:
Just to be clear, this would involve turning the BackendList dlllist into an
array in shared memory, right? If so, a couple of questions:
Per Jan's comment, there is no need to mess with the existing
datastructure. I'd think more of *copying* the dllist
Bruce Momjian [EMAIL PROTECTED] writes:
Uh, why would you need more than maxbackends? Can't it be indexed by
slot number --- each backend has a slot? Maybe I am missing something.
The postmaster has no way to know what slot number each backend will
get. For that matter, a sub-postmaster
Tom Lane:
Per Jan's comment, there is no need to mess with the existing
datastructure. I'd think more of *copying* the dllist into some array
in shared memory. This is code that would only need to exist in the
Windows port.
(I thought Jan was referring to the PGPROC struct)
This just
Tom Lane wrote:
I think the simplest way to make this work is to use an array that's
2*MaxBackend items long (corresponding to the max number of children the
postmaster will fork).
Actually, now that I think about it, is even that big enough? There is a
reason BackendList is a list. In
Claudio, where are we on this patch? I see an even newer version in the
archives:
http://archives.postgresql.org/pgsql-patches/2003-12/msg00361.php
However, I have never seen that version, and Google doesn't show it
either:
Claudio Natoli wrote:
Hi Bruce,
Claudio, where are we on this patch? I see an even newer version in the
archives:
http://archives.postgresql.org/pgsql-patches/2003-12/msg00361.php
[Weird you and Google groups missed it!]
No, seems only _I_ missed it. The google display
Your patch has been added to the PostgreSQL unapplied patches list at:
http://momjian.postgresql.org/cgi-bin/pgpatches
I will try to apply it within the next 48 hours.
---
Claudio Natoli wrote:
[patch edited + minor corrections (some duplication in arg passing); still
under discussion/review]
Cheers,
Claudio
-Original Message-
From: Claudio Natoli [mailto:[EMAIL PROTECTED]
Sent: Saturday, 27 December 2003 9:27 PM
To: 'Tom Lane'
Cc: '[EMAIL PROTECTED]'
Subject: Re: [PATCHES] fork/exec
Claudio Natoli [EMAIL PROTECTED] writes:
I don't follow your thinking here. The information that has to be
reloaded *must* be passed across the fork/exec boundary in
either case.
Yes. But not all of it.
I'll throw that same argument back at you: some of the information
needed for
Claudio Natoli [EMAIL PROTECTED] writes:
This has required some reworking of the existing code base, particularly to
BackendFork (which now actually does the fork()). As such, I've been
anticipating that this will be the most controversial of the fork/exec
patches, so critique away :-)
You
I said:
We should either find a way to make the fork/exec path more like the
existing code, or bite the bullet and change them both.
It occurs to me that you probably need some concrete suggestions about
what to do. Here's one.
There are really three separate sections in this code: the
Thanks for your comments Tom.
Tom Lane writes:
Claudio Natoli [EMAIL PROTECTED] writes:
This has required some reworking of the existing code base, particularly to
BackendFork (which now actually does the fork()). As such, I've been
anticipating that this will be the most controversial
Claudio Natoli [EMAIL PROTECTED] writes:
Ok. So, in conclusion we are in agreement on at least one thing: the current
patch sucks.
Good, I'm glad we see eye to eye on that ;-)
Here's the critical difference in our thinking:
ISTM that we'll have to go to a lot of work to get the
Patch applied. Thanks.
---
Claudio Natoli wrote:
For application to HEAD, pending community review.
Continued rearrangement to permit pgstat + BootstrapMain processes to be
fork/exec'd, in the same mode as the
-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 17 December 2003 11:12 AM
To: Claudio Natoli
Cc: '[EMAIL PROTECTED]'
Subject: Re: [PATCHES] fork/exec patch
Patch withdrawn. Author will resubmit new version
For application to HEAD, pending community review.
Continued rearrangement to permit pgstat + BootstrapMain processes to be
fork/exec'd, in the same mode as the previous patch for backends.
Cheers,
Claudio
---
Claudio Natoli wrote:
[Thought I replied to this already]
I am now thinking we have to remove pgsql/data/pgsql_tmp
unconditionally:
[snip]
The reason is that if they stop a postmaster that is
fork/exec, install
a non-exec postmaster, and restart, we should still clear out that
Resubmission, incorporating Neil Conway's comments and some minor
corrections.
-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 17 December 2003 11:12 AM
To: Claudio Natoli
Cc: '[EMAIL PROTECTED]'
Subject: Re: [PATCHES] fork/exec patch
Patch
[Thanks to Tom + Bruce]
For the remaining comments...
The number of "FIXME" or "this is ugly" comments doesn't ease my mind
about this patch :-) How many of these issues do you plan to resolve?
All of them, as we go along. Treat this as a first step.
- break;
Tom Lane [EMAIL PROTECTED] writes:
That one I can answer --- it's a bootstrapping issue: we can't use
LWLocks until we have a PGProc (*MyProc), and we can't set that up
without looking in the ShmemIndex for the related data structures.
So ShmemIndex needs to use a more primitive lock type.
Hi Neil,
No. This isn't necessary (and what action would it take in any
case?).
It should write a log message. I'm not sure why this /shouldn't/ be
done: if an operation fails, we should log that failure. This is
standard practise.
Fair enough. Will do (although, I'd point out that
Neil Conway [EMAIL PROTECTED] writes:
My next question would have been to ask whether switching to a
spinlock here will be a performance problem. In looking at the code,
it seems we only hold the ShmemIndexLock for a long time (hundreds of
instructions + multiple system calls) while
Claudio Natoli wrote:
(circa line 335 of ipc/shmem.c:)
[snip]
Doesn't this function still acquire ShmemIndexLock? (i.e. why was this
comment changed?)
AFAICS this is just whitespace differences.
With the exception of that missing break (Bruce, I guess it goes without
saying, but
Patch withdrawn. Author will resubmit new version.
---
Claudio Natoli wrote:
This patch is the next step towards (re)allowing fork/exec.
Bruce, I've cleaned up the parts we discussed, and, pending objections from
Claudio Natoli wrote:
Resubmission, incorporating Neil Conway's comments and some minor
corrections.
I am now thinking we have to remove pgsql/data/pgsql_tmp
unconditionally:
***
*** 1217,1224
{
while ((db_de =
Your patch has been added to the PostgreSQL unapplied patches list at:
http://momjian.postgresql.org/cgi-bin/pgpatches
I will try to apply it within the next 48 hours.
---
Claudio Natoli wrote:
This patch is
Claudio Natoli [EMAIL PROTECTED] writes:
This patch is the next step towards (re)allowing fork/exec.
I've included a few comments on the patch below. Unfortunately I only
had time to briefly look over the code...
Why did you change ShmemIndexLock from an LWLock to a spinlock?
The number of
Neil Conway wrote:
+ /*
+ * The following need to be available to the read/write_backend_variables
+ * functions
+ */
+ extern XLogRecPtr RedoRecPtr;
+ extern XLogwrtResult LogwrtResult;
+ extern slock_t *ShmemLock;
+ extern slock_t *ShmemIndexLock;
+ extern void *ShmemIndexAlloc;
+
Bruce Momjian [EMAIL PROTECTED] writes:
Let me add that Claudio is doing a fantastic job on this. The
changes are minimal and clean. I think the writing of a per-backend
temp file has allowed this patch to be smaller than it might have
been.
Did we REALLY conclude that the best way to work
Bruce Momjian [EMAIL PROTECTED] writes:
I don't think we ever discussed it, but it seemed logical and a minimal
change to the code. We already have a GUC write of non-default values
for exec and no one had issues with that.
For the record, I think that is ugly as well :-)
Anyway, I'm not
On Sun, 14 Dec 2003, Bruce Momjian wrote:
change to the code. We already have a GUC write of non-default values
for exec and no one had issues with that. Of course, this one is
per-backend.
Yea, we could use shared memory for this too, but I don't see a problem
with using the file
Dennis Bjorklund wrote:
On Sun, 14 Dec 2003, Bruce Momjian wrote:
change to the code. We already have a GUC write of non-default values
for exec and no one had issues with that. Of course, this one is
per-backend.
Yea, we could use shared memory for this too, but I don't see a
On Sun, 14 Dec 2003, Bruce Momjian wrote:
Why not use an anonymous pipe to send data from the parent to the child
process?
Doesn't that require the postmaster to stay around to feed that
information into the pipe or can the postmaster just shove the data and
continue on, and how do the
Hi all,
Dennis Bjorklund wrote:
Also has to work on Unix too for testing.
Not everything can work the same way on unix; CreateProcess() and fork()
are different.
True (but CreateProcess and fork followed by exec are pretty close). I
think what Bruce is implying is that, ideally, we'd like to keep
On Mon, 15 Dec 2003, Claudio Natoli wrote:
Moreover, in general, how do we handle things like this? IMHO, I'd rather
live with a few kludges (that don't impact the *nix code) until the Windows
port is actually a reality
As long as it does not hurt the unix code it's not a big problem as I see
Bruce Momjian [EMAIL PROTECTED] writes:
I don't think we ever discussed it, but it seemed logical and a minimal
change to the code. We already have a GUC write of non-default values
for exec and no one had issues with that.
You can hardly claim that no one had issues with that. I complained
Alvaro Herrera [EMAIL PROTECTED] writes:
On Sun, Dec 14, 2003 at 06:53:22PM -0500, Tom Lane wrote:
You can hardly claim that no one had issues with that.
Don't the FSM and the system catalog cache use a similar mechanism?
FSM uses a backing file to hold information over a database shutdown
Claudio Natoli wrote:
For example, couldn't we write this data into a particular location in
shared memory, and then pass that location to the child? That is still
ugly, slow, and prone to failure (shmem being statically sized), but
ISTM that the proposed implementation already possesses
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
I don't think we ever discussed it, but it seemed logical and a minimal
change to the code. We already have a GUC write of non-default values
for exec and no one had issues with that.
You can hardly claim that no one had issues
Bruce Momjian [EMAIL PROTECTED] writes:
Agreed, added to the Win32 status page:
* remove per-backend parameter file and move into shared memory
[itch] I'm not sure that's an answer either; see my comments about how
the postmaster shouldn't depend on the contents of shared memory being
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Agreed, added to the Win32 status page:
* remove per-backend parameter file and move into shared memory
[itch] I'm not sure that's an answer either; see my comments about how
the postmaster shouldn't depend on the contents of